| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2406.11938
|
Hayden Helm
|
Hayden Helm and Brandon Duderstadt and Youngser Park and Carey E.
Priebe
|
Tracking the perspectives of interacting language models
| null | null | null | null |
cs.AI cs.MA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large language models (LLMs) are capable of producing high quality
information at unprecedented rates. As these models continue to entrench
themselves in society, the content they produce will become increasingly
pervasive in databases that are, in turn, incorporated into the pre-training
data, fine-tuning data, retrieval data, etc. of other language models. In this
paper we formalize the idea of a communication network of LLMs and introduce a
method for representing the perspective of individual models within a
collection of LLMs. Given these tools we systematically study information
diffusion in the communication network of LLMs in various simulated settings.
|
[
{
"created": "Mon, 17 Jun 2024 17:20:16 GMT",
"version": "v1"
}
] |
2024-06-19
|
[
[
"Helm",
"Hayden",
""
],
[
"Duderstadt",
"Brandon",
""
],
[
"Park",
"Youngser",
""
],
[
"Priebe",
"Carey E.",
""
]
] |
Large language models (LLMs) are capable of producing high quality information at unprecedented rates. As these models continue to entrench themselves in society, the content they produce will become increasingly pervasive in databases that are, in turn, incorporated into the pre-training data, fine-tuning data, retrieval data, etc. of other language models. In this paper we formalize the idea of a communication network of LLMs and introduce a method for representing the perspective of individual models within a collection of LLMs. Given these tools we systematically study information diffusion in the communication network of LLMs in various simulated settings.
|
1907.03356
|
Petr Ro\v{c}kai
|
Petr Ro\v{c}kai, Zuzana Baranov\'a, Jan Mr\'azek, Katar\'ina
Kejstov\'a, Ji\v{r}\'i Barnat
|
Reproducible Execution of POSIX Programs with DiOS
| null | null | null | null |
cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we describe DiOS, a lightweight model operating system which
can be used to execute programs that make use of POSIX APIs. Such executions
are fully reproducible: running the same program with the same inputs twice
will result in two exactly identical instruction traces, even if the program
uses threads for parallelism.
DiOS is implemented almost entirely in portable C and C++: although its
primary platform is DiVM, a verification-oriented virtual machine, it can be
configured to also run in KLEE, a symbolic executor. Finally, it can be
compiled into machine code to serve as a user-mode kernel.
Additionally, DiOS is modular and extensible. Its various components can be
combined to match both the capabilities of the underlying platform and to
provide services required by a particular program. New components can be added
to cover additional system calls or APIs.
The experimental evaluation has two parts. DiOS is first evaluated as a
component of a program verification platform based on DiVM. In the second part,
we consider its portability and modularity by combining it with the symbolic
executor KLEE.
|
[
{
"created": "Sun, 7 Jul 2019 22:26:02 GMT",
"version": "v1"
}
] |
2019-07-09
|
[
[
"Ročkai",
"Petr",
""
],
[
"Baranová",
"Zuzana",
""
],
[
"Mrázek",
"Jan",
""
],
[
"Kejstová",
"Katarína",
""
],
[
"Barnat",
"Jiří",
""
]
] |
In this paper, we describe DiOS, a lightweight model operating system which can be used to execute programs that make use of POSIX APIs. Such executions are fully reproducible: running the same program with the same inputs twice will result in two exactly identical instruction traces, even if the program uses threads for parallelism. DiOS is implemented almost entirely in portable C and C++: although its primary platform is DiVM, a verification-oriented virtual machine, it can be configured to also run in KLEE, a symbolic executor. Finally, it can be compiled into machine code to serve as a user-mode kernel. Additionally, DiOS is modular and extensible. Its various components can be combined to match both the capabilities of the underlying platform and to provide services required by a particular program. New components can be added to cover additional system calls or APIs. The experimental evaluation has two parts. DiOS is first evaluated as a component of a program verification platform based on DiVM. In the second part, we consider its portability and modularity by combining it with the symbolic executor KLEE.
|
2309.07382
|
Hayate Iso
|
Yunshu Wu, Hayate Iso, Pouya Pezeshkpour, Nikita Bhutani, Estevam
Hruschka
|
Less is More for Long Document Summary Evaluation by LLMs
|
EACL (main)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) have shown promising performance in summary
evaluation tasks, yet they face challenges such as high computational costs and
the Lost-in-the-Middle problem where important information in the middle of
long documents is often overlooked. To address these issues, this paper
introduces a novel approach, Extract-then-Evaluate, which involves extracting
key sentences from a long source document and then evaluating the summary by
prompting LLMs. The results reveal that the proposed method not only
significantly reduces evaluation costs but also exhibits a higher correlation
with human evaluations. Furthermore, we provide practical recommendations for
optimal document length and sentence extraction methods, contributing to the
development of cost-effective yet more accurate methods for LLM-based text
generation evaluation.
|
[
{
"created": "Thu, 14 Sep 2023 01:59:15 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jan 2024 18:23:37 GMT",
"version": "v2"
}
] |
2024-01-19
|
[
[
"Wu",
"Yunshu",
""
],
[
"Iso",
"Hayate",
""
],
[
"Pezeshkpour",
"Pouya",
""
],
[
"Bhutani",
"Nikita",
""
],
[
"Hruschka",
"Estevam",
""
]
] |
Large Language Models (LLMs) have shown promising performance in summary evaluation tasks, yet they face challenges such as high computational costs and the Lost-in-the-Middle problem where important information in the middle of long documents is often overlooked. To address these issues, this paper introduces a novel approach, Extract-then-Evaluate, which involves extracting key sentences from a long source document and then evaluating the summary by prompting LLMs. The results reveal that the proposed method not only significantly reduces evaluation costs but also exhibits a higher correlation with human evaluations. Furthermore, we provide practical recommendations for optimal document length and sentence extraction methods, contributing to the development of cost-effective yet more accurate methods for LLM-based text generation evaluation.
|
2403.08783
|
Fatma Shalabi
|
Fatma Shalabi, Huy H. Nguyen, Hichem Felouat, Ching-Chun Chang, and
Isao Echizen
|
Image-Text Out-Of-Context Detection Using Synthetic Multimodal
Misinformation
|
8 pages, 2 figures, conference
| null |
10.1109/APSIPAASC58517.2023.10317336
| null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Misinformation has become a major challenge in the era of increasing digital
information, requiring the development of effective detection methods. We have
investigated a novel approach to Out-Of-Context detection (OOCD) that uses
synthetic data generation. We created a dataset specifically designed for OOCD
and developed an efficient detector for accurate classification. Our
experimental findings validate the use of synthetic data generation and
demonstrate its efficacy in addressing the data limitations associated with
OOCD. The dataset and detector should serve as valuable resources for future
research and the development of robust misinformation detection systems.
|
[
{
"created": "Mon, 29 Jan 2024 11:55:14 GMT",
"version": "v1"
}
] |
2024-03-15
|
[
[
"Shalabi",
"Fatma",
""
],
[
"Nguyen",
"Huy H.",
""
],
[
"Felouat",
"Hichem",
""
],
[
"Chang",
"Ching-Chun",
""
],
[
"Echizen",
"Isao",
""
]
] |
Misinformation has become a major challenge in the era of increasing digital information, requiring the development of effective detection methods. We have investigated a novel approach to Out-Of-Context detection (OOCD) that uses synthetic data generation. We created a dataset specifically designed for OOCD and developed an efficient detector for accurate classification. Our experimental findings validate the use of synthetic data generation and demonstrate its efficacy in addressing the data limitations associated with OOCD. The dataset and detector should serve as valuable resources for future research and the development of robust misinformation detection systems.
|
1111.3602
|
Davide Schipani
|
Michele Elia, Davide Schipani
|
On the Rabin signature
|
General revision; new section on blind signatures
| null | null | null |
cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some Rabin signature schemes may be exposed to forgery; several variants are
here described to counter this vulnerability. Blind Rabin signatures are also
discussed.
|
[
{
"created": "Thu, 10 Nov 2011 23:50:06 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Nov 2011 17:04:16 GMT",
"version": "v2"
},
{
"created": "Sat, 17 Dec 2011 18:11:00 GMT",
"version": "v3"
}
] |
2011-12-20
|
[
[
"Elia",
"Michele",
""
],
[
"Schipani",
"Davide",
""
]
] |
Some Rabin signature schemes may be exposed to forgery; several variants are here described to counter this vulnerability. Blind Rabin signatures are also discussed.
|
1203.5830
|
Viet Hung Nguyen
|
Viet Hung Nguyen and Fabio Massacci
|
An Independent Validation of Vulnerability Discovery Models
|
This paper is to appear in ASIACCS'12
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Having a precise vulnerability discovery model (VDM) would provide a useful
quantitative insight to assess software security. Thus far, several models have
been proposed with some evidence supporting their goodness-of-fit.
In this work we describe an independent validation of the applicability of
six existing VDMs in seventeen releases of the three popular browsers Firefox,
Google Chrome and Internet Explorer. We have collected five different kinds of
data sets based on different definitions of a vulnerability. We introduce two
quantitative metrics, goodness-of-fit entropy and goodness-of-fit quality, to
analyze the impact of the vulnerability data sets on the stability and
quality of VDMs across the software life cycle.
The experimental results show that the "confirmed-by-vendors' advisories" data
sets yield more stable and better results for VDMs, and the s-shaped logistic
model (AML) appears to perform best overall. Meanwhile, the Anderson
thermodynamic model (AT) is not suitable for modeling the vulnerability
discovery process. This suggests that the discovery processes for
vulnerabilities and for normal bugs are different, because people are more
interested in finding security vulnerabilities than in finding normal
programming bugs.
|
[
{
"created": "Mon, 26 Mar 2012 22:14:15 GMT",
"version": "v1"
}
] |
2012-03-28
|
[
[
"Nguyen",
"Viet Hung",
""
],
[
"Massacci",
"Fabio",
""
]
] |
Having a precise vulnerability discovery model (VDM) would provide a useful quantitative insight to assess software security. Thus far, several models have been proposed with some evidence supporting their goodness-of-fit. In this work we describe an independent validation of the applicability of six existing VDMs in seventeen releases of the three popular browsers Firefox, Google Chrome and Internet Explorer. We have collected five different kinds of data sets based on different definitions of a vulnerability. We introduce two quantitative metrics, goodness-of-fit entropy and goodness-of-fit quality, to analyze the impact of the vulnerability data sets on the stability and quality of VDMs across the software life cycle. The experimental results show that the "confirmed-by-vendors' advisories" data sets yield more stable and better results for VDMs, and the s-shaped logistic model (AML) appears to perform best overall. Meanwhile, the Anderson thermodynamic model (AT) is not suitable for modeling the vulnerability discovery process. This suggests that the discovery processes for vulnerabilities and for normal bugs are different, because people are more interested in finding security vulnerabilities than in finding normal programming bugs.
|
2008.12132
|
Christina Uhl
|
Christina Uhl, Nadia Abou Nabout, Klaus Miller
|
How Much Ad Viewability is Enough? The Effect of Display Ad Viewability
on Advertising Effectiveness
| null | null | null | null |
cs.CY econ.GN q-fin.EC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A large share of all online display advertisements (ads) are never seen by a
human. For instance, an ad could appear below the page fold, where a user never
scrolls. Yet, an ad is essentially ineffective if it is not at least somewhat
viewable. Ad viewability - which refers to the pixel percentage-in-view and the
exposure duration of an online display ad - has recently garnered great
interest among digital advertisers and publishers. However, we know very little
about the impact of ad viewability on advertising effectiveness. We work to
close this gap by analyzing a large-scale observational data set with more than
350,000 ad impressions similar to the data sets that are typically available to
digital advertisers and publishers. This analysis reveals that longer exposure
durations (>10 seconds) and 100% visible pixels do not appear to be optimal in
generating view-throughs. The highest view-through rates seem to be generated
with relatively lower pixel/second-combinations of 50%/1, 50%/5, 75%/1, and
75%/5. However, this analysis does not account for user behavior that may be
correlated with or even drive ad viewability and may therefore result in
endogeneity issues. Consequently, we manipulated ad viewability in a randomized
online experiment for a major European news website, finding the highest ad
recognition rates among relatively higher pixel/second-combinations of 75%/10,
100%/5 and 100%/10. Everything below 75% or 5 seconds performs worse. Yet, we
find that it may be sufficient to have either a long exposure duration or high
pixel percentage-in-view to reach high advertising effectiveness. Our results
provide guidance to advertisers enabling them to establish target viewability
rates more appropriately and to publishers who wish to differentiate their
viewability products.
|
[
{
"created": "Wed, 26 Aug 2020 05:49:57 GMT",
"version": "v1"
}
] |
2020-08-28
|
[
[
"Uhl",
"Christina",
""
],
[
"Nabout",
"Nadia Abou",
""
],
[
"Miller",
"Klaus",
""
]
] |
A large share of all online display advertisements (ads) are never seen by a human. For instance, an ad could appear below the page fold, where a user never scrolls. Yet, an ad is essentially ineffective if it is not at least somewhat viewable. Ad viewability - which refers to the pixel percentage-in-view and the exposure duration of an online display ad - has recently garnered great interest among digital advertisers and publishers. However, we know very little about the impact of ad viewability on advertising effectiveness. We work to close this gap by analyzing a large-scale observational data set with more than 350,000 ad impressions similar to the data sets that are typically available to digital advertisers and publishers. This analysis reveals that longer exposure durations (>10 seconds) and 100% visible pixels do not appear to be optimal in generating view-throughs. The highest view-through rates seem to be generated with relatively lower pixel/second-combinations of 50%/1, 50%/5, 75%/1, and 75%/5. However, this analysis does not account for user behavior that may be correlated with or even drive ad viewability and may therefore result in endogeneity issues. Consequently, we manipulated ad viewability in a randomized online experiment for a major European news website, finding the highest ad recognition rates among relatively higher pixel/second-combinations of 75%/10, 100%/5 and 100%/10. Everything below 75% or 5 seconds performs worse. Yet, we find that it may be sufficient to have either a long exposure duration or high pixel percentage-in-view to reach high advertising effectiveness. Our results provide guidance to advertisers enabling them to establish target viewability rates more appropriately and to publishers who wish to differentiate their viewability products.
|
1912.10847
|
Preeti Sah
|
Preeti Sah and Ernest Fokou\'e
|
What do Asian Religions Have in Common? An Unsupervised Text Analytics
Exploration
|
18 pages, 22 figures
| null | null | null |
cs.CL cs.LG stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
The main source of various religious teachings is their sacred texts, which
vary from religion to religion based on factors like the geographical location
or time of birth of a particular religion. Despite these differences, there
could be similarities between the sacred texts based on the lessons they teach
their followers. This paper attempts to find such similarity using text mining
techniques. A corpus consisting of Asian (Tao Te Ching, Buddhism, Yogasutra,
Upanishad) and non-Asian (four Bible texts) sacred texts is used to explore
similarity measures like Euclidean, Manhattan, Jaccard and Cosine on the raw
Document Term Matrix [DTM] and the normalized DTM, which reveal similarity
based on word usage. The performance of supervised learning algorithms like
K-Nearest Neighbor [KNN], Support Vector Machine [SVM] and Random Forest is
measured by their accuracy in predicting the correct sacred text for any given
chapter in the corpus. K-means clustering visualizations on Euclidean
distances of the raw DTM reveal a pattern of similarity among these sacred
texts, with the Upanishads and the Tao Te Ching being the most similar texts
in the corpus.
|
[
{
"created": "Fri, 20 Dec 2019 18:28:29 GMT",
"version": "v1"
}
] |
2019-12-24
|
[
[
"Sah",
"Preeti",
""
],
[
"Fokoué",
"Ernest",
""
]
] |
The main source of various religious teachings is their sacred texts, which vary from religion to religion based on factors like the geographical location or time of birth of a particular religion. Despite these differences, there could be similarities between the sacred texts based on the lessons they teach their followers. This paper attempts to find such similarity using text mining techniques. A corpus consisting of Asian (Tao Te Ching, Buddhism, Yogasutra, Upanishad) and non-Asian (four Bible texts) sacred texts is used to explore similarity measures like Euclidean, Manhattan, Jaccard and Cosine on the raw Document Term Matrix [DTM] and the normalized DTM, which reveal similarity based on word usage. The performance of supervised learning algorithms like K-Nearest Neighbor [KNN], Support Vector Machine [SVM] and Random Forest is measured by their accuracy in predicting the correct sacred text for any given chapter in the corpus. K-means clustering visualizations on Euclidean distances of the raw DTM reveal a pattern of similarity among these sacred texts, with the Upanishads and the Tao Te Ching being the most similar texts in the corpus.
|
1905.00784
|
Aline Goeminne
|
Thomas Brihaye, V\'eronique Bruy\`ere, Aline Goeminne,
Jean-Fran\c{c}ois Raskin and Marie van den Bogaard
|
The Complexity of Subgame Perfect Equilibria in Quantitative
Reachability Games
| null |
Logical Methods in Computer Science, Volume 16, Issue 4 (November
5, 2020) lmcs:5966
|
10.23638/LMCS-16(4:8)2020
| null |
cs.GT cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We study multiplayer quantitative reachability games played on a finite
directed graph, where the objective of each player is to reach his target set
of vertices as quickly as possible. Instead of the well-known notion of Nash
equilibrium (NE), we focus on the notion of subgame perfect equilibrium (SPE),
a refinement of NE well-suited in the framework of games played on graphs. It
is known that there always exists an SPE in quantitative reachability games and
that the constrained existence problem is decidable. We here prove that this
problem is PSPACE-complete. To obtain this result, we propose a new algorithm
that iteratively builds a set of constraints characterizing the set of SPE
outcomes in quantitative reachability games. This set of constraints is
obtained by iterating an operator that reinforces the constraints up to
obtaining a fixpoint. With this fixpoint, the set of SPE outcomes can be
represented by a finite graph of size at most exponential. A careful inspection
of the computation allows us to establish PSPACE membership.
|
[
{
"created": "Thu, 2 May 2019 14:47:15 GMT",
"version": "v1"
},
{
"created": "Fri, 3 May 2019 13:19:51 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Jul 2019 13:04:43 GMT",
"version": "v3"
},
{
"created": "Wed, 4 Dec 2019 10:55:43 GMT",
"version": "v4"
},
{
"created": "Sat, 27 Jun 2020 17:51:24 GMT",
"version": "v5"
},
{
"created": "Tue, 13 Oct 2020 12:36:53 GMT",
"version": "v6"
},
{
"created": "Wed, 4 Nov 2020 17:54:25 GMT",
"version": "v7"
}
] |
2023-06-22
|
[
[
"Brihaye",
"Thomas",
""
],
[
"Bruyère",
"Véronique",
""
],
[
"Goeminne",
"Aline",
""
],
[
"Raskin",
"Jean-François",
""
],
[
"Bogaard",
"Marie van den",
""
]
] |
We study multiplayer quantitative reachability games played on a finite directed graph, where the objective of each player is to reach his target set of vertices as quickly as possible. Instead of the well-known notion of Nash equilibrium (NE), we focus on the notion of subgame perfect equilibrium (SPE), a refinement of NE well-suited in the framework of games played on graphs. It is known that there always exists an SPE in quantitative reachability games and that the constrained existence problem is decidable. We here prove that this problem is PSPACE-complete. To obtain this result, we propose a new algorithm that iteratively builds a set of constraints characterizing the set of SPE outcomes in quantitative reachability games. This set of constraints is obtained by iterating an operator that reinforces the constraints up to obtaining a fixpoint. With this fixpoint, the set of SPE outcomes can be represented by a finite graph of size at most exponential. A careful inspection of the computation allows us to establish PSPACE membership.
|
2204.07321
|
Chuang Liu
|
Chuang Liu, Yibing Zhan, Jia Wu, Chang Li, Bo Du, Wenbin Hu, Tongliang
Liu, Dacheng Tao
|
Graph Pooling for Graph Neural Networks: Progress, Challenges, and
Opportunities
|
11 pages, 2 figures. Accepted by IJCAI Survey Track 2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph neural networks have emerged as a leading architecture for many
graph-level tasks, such as graph classification and graph generation. As an
essential component of the architecture, graph pooling is indispensable for
obtaining a holistic graph-level representation of the whole graph. Although a
great variety of methods have been proposed in this promising and
fast-developing research field, to the best of our knowledge, little effort has
been made to systematically summarize these works. To set the stage for the
development of future works, in this paper, we attempt to fill this gap by
providing a broad review of recent methods for graph pooling. Specifically, 1)
we first propose a taxonomy of existing graph pooling methods with a
mathematical summary for each category; 2) then, we provide an overview of the
libraries related to graph pooling, including the commonly used datasets, model
architectures for downstream tasks, and open-source implementations; 3) next,
we further outline the applications that incorporate the idea of graph pooling
in a variety of domains; 4) finally, we discuss certain critical challenges
facing current studies and share our insights on future potential directions
for research on the improvement of graph pooling.
|
[
{
"created": "Fri, 15 Apr 2022 04:02:06 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Jun 2023 15:00:07 GMT",
"version": "v2"
}
] |
2023-06-23
|
[
[
"Liu",
"Chuang",
""
],
[
"Zhan",
"Yibing",
""
],
[
"Wu",
"Jia",
""
],
[
"Li",
"Chang",
""
],
[
"Du",
"Bo",
""
],
[
"Hu",
"Wenbin",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Tao",
"Dacheng",
""
]
] |
Graph neural networks have emerged as a leading architecture for many graph-level tasks, such as graph classification and graph generation. As an essential component of the architecture, graph pooling is indispensable for obtaining a holistic graph-level representation of the whole graph. Although a great variety of methods have been proposed in this promising and fast-developing research field, to the best of our knowledge, little effort has been made to systematically summarize these works. To set the stage for the development of future works, in this paper, we attempt to fill this gap by providing a broad review of recent methods for graph pooling. Specifically, 1) we first propose a taxonomy of existing graph pooling methods with a mathematical summary for each category; 2) then, we provide an overview of the libraries related to graph pooling, including the commonly used datasets, model architectures for downstream tasks, and open-source implementations; 3) next, we further outline the applications that incorporate the idea of graph pooling in a variety of domains; 4) finally, we discuss certain critical challenges facing current studies and share our insights on future potential directions for research on the improvement of graph pooling.
|
1804.01422
|
Jian Xu
|
Jian Xu, Chunheng Wang, Chengzuo Qi, Cunzhao Shi, and Baihua Xiao
|
Unsupervised Semantic-based Aggregation of Deep Convolutional Features
|
10 pages. arXiv admin note: text overlap with arXiv:1705.01247
| null |
10.1109/TIP.2018.2867104
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a simple but effective semantic-based aggregation
(SBA) method. The proposed SBA utilizes the discriminative filters of deep
convolutional layers as semantic detectors. Moreover, we propose an effective
unsupervised strategy to select semantic detectors that generate
"probabilistic proposals", which highlight discriminative patterns of
objects and suppress background noise. The final global SBA
representation could then be acquired by aggregating the regional
representations weighted by the selected "probabilistic proposals"
corresponding to various semantic content. Our unsupervised SBA is easy to
generalize and achieves excellent performance on various tasks. We conduct
comprehensive experiments and show that our unsupervised SBA outperforms the
state-of-the-art unsupervised and supervised aggregation methods on image
retrieval, place recognition and cloud classification.
|
[
{
"created": "Tue, 3 Apr 2018 07:43:05 GMT",
"version": "v1"
}
] |
2018-11-14
|
[
[
"Xu",
"Jian",
""
],
[
"Wang",
"Chunheng",
""
],
[
"Qi",
"Chengzuo",
""
],
[
"Shi",
"Cunzhao",
""
],
[
"Xiao",
"Baihua",
""
]
] |
In this paper, we propose a simple but effective semantic-based aggregation (SBA) method. The proposed SBA utilizes the discriminative filters of deep convolutional layers as semantic detectors. Moreover, we propose an effective unsupervised strategy to select semantic detectors that generate "probabilistic proposals", which highlight discriminative patterns of objects and suppress background noise. The final global SBA representation could then be acquired by aggregating the regional representations weighted by the selected "probabilistic proposals" corresponding to various semantic content. Our unsupervised SBA is easy to generalize and achieves excellent performance on various tasks. We conduct comprehensive experiments and show that our unsupervised SBA outperforms the state-of-the-art unsupervised and supervised aggregation methods on image retrieval, place recognition and cloud classification.
|
2012.00508
|
Rupert Mitchell
|
Rupert Mitchell, Jan Blumenkamp and Amanda Prorok
|
Gaussian Process Based Message Filtering for Robust Multi-Agent
Cooperation in the Presence of Adversarial Communication
| null | null | null | null |
cs.RO cs.AI cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider the problem of providing robustness to adversarial
communication in multi-agent systems. Specifically, we propose a solution
towards robust cooperation, which enables the multi-agent system to maintain
high performance in the presence of anonymous non-cooperative agents that
communicate faulty, misleading or manipulative information. In pursuit of this
goal, we propose a communication architecture based on Graph Neural Networks
(GNNs), which is amenable to a novel Gaussian Process (GP)-based probabilistic
model characterizing the mutual information between the simultaneous
communications of different agents due to their physical proximity and relative
position. This model allows agents to locally compute approximate posterior
probabilities, or confidences, that any given one of their communication
partners is being truthful. These confidences can be used as weights in a
message filtering scheme, thereby suppressing the influence of suspicious
communication on the receiving agent's decisions. In order to assess the
efficacy of our method, we introduce a taxonomy of non-cooperative agents,
which distinguishes them by the amount of information available to them. We
demonstrate in two distinct experiments that our method performs well across
this taxonomy, outperforming alternative methods. For all but the best informed
adversaries, our filtering method is able to reduce the impact that
non-cooperative agents cause, reducing it to the point of negligibility, and
with negligible cost to performance in the absence of adversaries.
|
[
{
"created": "Tue, 1 Dec 2020 14:21:58 GMT",
"version": "v1"
}
] |
2020-12-02
|
[
[
"Mitchell",
"Rupert",
""
],
[
"Blumenkamp",
"Jan",
""
],
[
"Prorok",
"Amanda",
""
]
] |
In this paper, we consider the problem of providing robustness to adversarial communication in multi-agent systems. Specifically, we propose a solution towards robust cooperation, which enables the multi-agent system to maintain high performance in the presence of anonymous non-cooperative agents that communicate faulty, misleading or manipulative information. In pursuit of this goal, we propose a communication architecture based on Graph Neural Networks (GNNs), which is amenable to a novel Gaussian Process (GP)-based probabilistic model characterizing the mutual information between the simultaneous communications of different agents due to their physical proximity and relative position. This model allows agents to locally compute approximate posterior probabilities, or confidences, that any given one of their communication partners is being truthful. These confidences can be used as weights in a message filtering scheme, thereby suppressing the influence of suspicious communication on the receiving agent's decisions. In order to assess the efficacy of our method, we introduce a taxonomy of non-cooperative agents, which distinguishes them by the amount of information available to them. We demonstrate in two distinct experiments that our method performs well across this taxonomy, outperforming alternative methods. For all but the best informed adversaries, our filtering method is able to reduce the impact that non-cooperative agents cause, reducing it to the point of negligibility, and with negligible cost to performance in the absence of adversaries.
|
1412.2620
|
Gundram Leifert
|
G. Leifert, T. Strau{\ss}, T. Gr\"uning, R. Labahn (University of
Rostock)
|
Cells in Multidimensional Recurrent Neural Networks
| null |
Journal of Machine Learning Research 17 (2016) 1-37
| null | null |
cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transcription of handwritten text in images is a task in machine
learning, and one way to solve it is to use multi-dimensional recurrent neural
networks (MDRNNs) with connectionist temporal classification (CTC). The RNNs
can contain special units, the long short-term memory (LSTM) cells, which are
able to learn long-term dependencies but become unstable when the dimension is
greater than one. We define some useful and necessary properties for the
one-dimensional LSTM cell and extend them to the multi-dimensional case,
thereby introducing several new cells with better stability. We present a
method to design cells using the theory of linear shift-invariant systems. The
new cells are compared to the LSTM cell on the IFN/ENIT and Rimes databases,
where they improve the recognition rate over the LSTM cell. Thus, every
application where LSTM cells are used in MDRNNs could be improved by
substituting the newly developed cells for them.
|
[
{
"created": "Mon, 8 Dec 2014 15:47:45 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Feb 2016 12:26:37 GMT",
"version": "v2"
}
] |
2019-08-28
|
[
[
"Leifert",
"G.",
"",
"University of\n Rostock"
],
[
"Strauß",
"T.",
"",
"University of\n Rostock"
],
[
"Grüning",
"T.",
"",
"University of\n Rostock"
],
[
"Labahn",
"R.",
"",
"University of\n Rostock"
]
] |
The transcription of handwritten text in images is a task in machine learning, and one way to solve it is to use multi-dimensional recurrent neural networks (MDRNNs) with connectionist temporal classification (CTC). The RNNs can contain special units, the long short-term memory (LSTM) cells, which are able to learn long-term dependencies but become unstable when the dimension is greater than one. We define some useful and necessary properties for the one-dimensional LSTM cell and extend them to the multi-dimensional case, thereby introducing several new cells with better stability. We present a method to design cells using the theory of linear shift-invariant systems. The new cells are compared to the LSTM cell on the IFN/ENIT and Rimes databases, where they improve the recognition rate over the LSTM cell. Thus, every application where LSTM cells are used in MDRNNs could be improved by substituting the newly developed cells for them.
|
2209.04687
|
Lantian Li Mr.
|
Lantian Li and Di Wang and Dong Wang
|
Pay Attention to Hard Trials
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Performance of speaker recognition systems is evaluated on test trials.
Although as crucial as rulers for tailors, trials have not been carefully
treated so far, and most existing benchmarks compose trials by naive
cross-pairing. In this paper, we argue that the cross-pairing approach produces an
overwhelming number of easy trials, which in turn leads to potential bias in system and
technique comparison. To solve the problem, we advocate more attention to hard
trials. We present an SVM-based approach to identifying hard trials and use it
to construct new evaluation sets for VoxCeleb1 and SITW. With the new sets, we
can re-evaluate the contribution of some recent technologies. The code and the
identified hard trials will be published online at http://project.cslt.org.
|
[
{
"created": "Sat, 10 Sep 2022 15:16:05 GMT",
"version": "v1"
}
] |
2022-09-13
|
[
[
"Li",
"Lantian",
""
],
[
"Wang",
"Di",
""
],
[
"Wang",
"Dong",
""
]
] |
Performance of speaker recognition systems is evaluated on test trials. Although as crucial as rulers for tailors, trials have not been carefully treated so far, and most existing benchmarks compose trials by naive cross-pairing. In this paper, we argue that the cross-pairing approach produces an overwhelming number of easy trials, which in turn leads to potential bias in system and technique comparison. To solve the problem, we advocate more attention to hard trials. We present an SVM-based approach to identifying hard trials and use it to construct new evaluation sets for VoxCeleb1 and SITW. With the new sets, we can re-evaluate the contribution of some recent technologies. The code and the identified hard trials will be published online at http://project.cslt.org.
|
1811.05013
|
Ankesh Anand
|
Ankesh Anand, Eugene Belilovsky, Kyle Kastner, Hugo Larochelle, Aaron
Courville
|
Blindfold Baselines for Embodied QA
|
NIPS 2018 Visually-Grounded Interaction and Language (ViGIL)
Workshop
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore blindfold (question-only) baselines for Embodied Question
Answering. The EmbodiedQA task requires an agent to answer a question by
intelligently navigating in a simulated environment, gathering necessary visual
information only through first-person vision before finally answering.
Consequently, a blindfold baseline which ignores the environment and visual
information is a degenerate solution, yet we show through our experiments on
the EQAv1 dataset that a simple question-only baseline achieves
state-of-the-art results on the EmbodiedQA task in all cases except when the
agent is spawned extremely close to the object.
|
[
{
"created": "Mon, 12 Nov 2018 21:45:41 GMT",
"version": "v1"
}
] |
2018-11-14
|
[
[
"Anand",
"Ankesh",
""
],
[
"Belilovsky",
"Eugene",
""
],
[
"Kastner",
"Kyle",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Courville",
"Aaron",
""
]
] |
We explore blindfold (question-only) baselines for Embodied Question Answering. The EmbodiedQA task requires an agent to answer a question by intelligently navigating in a simulated environment, gathering necessary visual information only through first-person vision before finally answering. Consequently, a blindfold baseline which ignores the environment and visual information is a degenerate solution, yet we show through our experiments on the EQAv1 dataset that a simple question-only baseline achieves state-of-the-art results on the EmbodiedQA task in all cases except when the agent is spawned extremely close to the object.
|
2408.02614
|
Nour Khezemi
|
Nour Khezemi, Sikandar Ejaza, Naouel Moha, Yann-Ga\"el Gu\'eh\'eneuc
|
Comparison of Code Quality and Best Practices in IoT and non-IoT
Software
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Context: IoT systems, networks of connected devices powered by software,
require studying software quality for maintenance. Despite extensive studies on
non-IoT software quality, research on IoT software quality is lacking. It is
uncertain if IoT and non-IoT systems software are comparable, hindering the
confident application of results and best practices gained on non-IoT systems.
Objective: Therefore, we compare the code quality of two equivalent sets of
IoT and non-IoT systems to determine whether there are similarities and
differences. We also collect and revisit software-engineering best practices in
non-IoT contexts to apply them to IoT.
Method: We design and apply a systematic method to select two sets of 94
non-IoT and IoT systems software from GitHub with comparable characteristics.
We compute quality metrics on the systems in these two sets and then analyse
and compare the metric values. We analyse in depth and provide specific
examples of IoT systems' complexity and how it manifests in the codebases.
After the comparison, we systematically select and present a list of best
practices to address the observed differences between IoT and non-IoT code.
Results: Through a comparison of metrics, we conclude that software for IoT
systems is more complex, more coupled, larger, less maintainable, and less
cohesive than that of non-IoT systems. Several factors, such as integrating
multiple hardware and software components and managing data communication
between them, contribute to these differences. Considering these differences,
we present a revisited list of best practices with approaches, tools, or
techniques for developing IoT systems. For example, applying modularity and
refactoring are best practices for lowering complexity.
Conclusion: Based on our work, researchers can now make an informed decision
using existing studies on the quality of non-IoT systems for IoT systems.
|
[
{
"created": "Mon, 5 Aug 2024 16:39:04 GMT",
"version": "v1"
}
] |
2024-08-06
|
[
[
"Khezemi",
"Nour",
""
],
[
"Ejaza",
"Sikandar",
""
],
[
"Moha",
"Naouel",
""
],
[
"Guéhéneuc",
"Yann-Gaël",
""
]
] |
Context: IoT systems, networks of connected devices powered by software, require studying software quality for maintenance. Despite extensive studies on non-IoT software quality, research on IoT software quality is lacking. It is uncertain if IoT and non-IoT systems software are comparable, hindering the confident application of results and best practices gained on non-IoT systems. Objective: Therefore, we compare the code quality of two equivalent sets of IoT and non-IoT systems to determine whether there are similarities and differences. We also collect and revisit software-engineering best practices in non-IoT contexts to apply them to IoT. Method: We design and apply a systematic method to select two sets of 94 non-IoT and IoT systems software from GitHub with comparable characteristics. We compute quality metrics on the systems in these two sets and then analyse and compare the metric values. We analyse in depth and provide specific examples of IoT systems' complexity and how it manifests in the codebases. After the comparison, we systematically select and present a list of best practices to address the observed differences between IoT and non-IoT code. Results: Through a comparison of metrics, we conclude that software for IoT systems is more complex, more coupled, larger, less maintainable, and less cohesive than that of non-IoT systems. Several factors, such as integrating multiple hardware and software components and managing data communication between them, contribute to these differences. Considering these differences, we present a revisited list of best practices with approaches, tools, or techniques for developing IoT systems. For example, applying modularity and refactoring are best practices for lowering complexity. Conclusion: Based on our work, researchers can now make an informed decision using existing studies on the quality of non-IoT systems for IoT systems.
|
2005.10876
|
Marco Toldo
|
Marco Toldo, Andrea Maracani, Umberto Michieli and Pietro Zanuttigh
|
Unsupervised Domain Adaptation in Semantic Segmentation: a Review
|
34 pages, 7 figures, 2 tables
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of this paper is to give an overview of the recent advancements in
the Unsupervised Domain Adaptation (UDA) of deep networks for semantic
segmentation. This task is attracting wide interest, since semantic
segmentation models require a huge amount of labeled data and the lack of data
fitting specific requirements is the main limitation in the deployment of these
techniques. This problem has been recently explored and has rapidly grown with
a large number of ad-hoc approaches. This motivates us to build a comprehensive
overview of the proposed methodologies and to provide a clear categorization.
In this paper, we start by introducing the problem, its formulation and the
various scenarios that can be considered. Then, we introduce the different
levels at which adaptation strategies may be applied: namely, at the input
(image) level, at the internal feature representation level and at the output level.
Furthermore, we present a detailed overview of the literature in the field,
dividing previous methods based on the following (non mutually exclusive)
categories: adversarial learning, generative-based, analysis of the classifier
discrepancies, self-teaching, entropy minimization, curriculum learning and
multi-task learning. Novel research directions are also briefly introduced to
give a hint of interesting open problems in the field. Finally, a comparison of
the performance of the various methods in the widely used autonomous driving
scenario is presented.
|
[
{
"created": "Thu, 21 May 2020 20:10:38 GMT",
"version": "v1"
}
] |
2020-05-25
|
[
[
"Toldo",
"Marco",
""
],
[
"Maracani",
"Andrea",
""
],
[
"Michieli",
"Umberto",
""
],
[
"Zanuttigh",
"Pietro",
""
]
] |
The aim of this paper is to give an overview of the recent advancements in the Unsupervised Domain Adaptation (UDA) of deep networks for semantic segmentation. This task is attracting wide interest, since semantic segmentation models require a huge amount of labeled data and the lack of data fitting specific requirements is the main limitation in the deployment of these techniques. This problem has been recently explored and has rapidly grown with a large number of ad-hoc approaches. This motivates us to build a comprehensive overview of the proposed methodologies and to provide a clear categorization. In this paper, we start by introducing the problem, its formulation and the various scenarios that can be considered. Then, we introduce the different levels at which adaptation strategies may be applied: namely, at the input (image) level, at the internal feature representation level and at the output level. Furthermore, we present a detailed overview of the literature in the field, dividing previous methods based on the following (non mutually exclusive) categories: adversarial learning, generative-based, analysis of the classifier discrepancies, self-teaching, entropy minimization, curriculum learning and multi-task learning. Novel research directions are also briefly introduced to give a hint of interesting open problems in the field. Finally, a comparison of the performance of the various methods in the widely used autonomous driving scenario is presented.
|
2010.04887
|
Forrest Davis
|
Forrest Davis and Marten van Schijndel
|
Discourse structure interacts with reference but not syntax in neural
language models
|
Proceedings of the 2020 Conference on Computational Natural Language
Learning (CoNLL 2020)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language models (LMs) trained on large quantities of text have been claimed
to acquire abstract linguistic representations. Our work tests the robustness
of these abstractions by focusing on the ability of LMs to learn interactions
between different linguistic representations. In particular, we utilized
stimuli from psycholinguistic studies showing that humans can condition
reference (i.e. coreference resolution) and syntactic processing on the same
discourse structure (implicit causality). We compared both transformer and long
short-term memory LMs to find that, contrary to humans, implicit causality only
influences LM behavior for reference, not syntax, despite model representations
that encode the necessary discourse information. Our results further suggest
that LM behavior can contradict not only learned representations of discourse
but also syntactic agreement, pointing to shortcomings of standard language
modeling.
|
[
{
"created": "Sat, 10 Oct 2020 03:14:00 GMT",
"version": "v1"
}
] |
2020-10-13
|
[
[
"Davis",
"Forrest",
""
],
[
"van Schijndel",
"Marten",
""
]
] |
Language models (LMs) trained on large quantities of text have been claimed to acquire abstract linguistic representations. Our work tests the robustness of these abstractions by focusing on the ability of LMs to learn interactions between different linguistic representations. In particular, we utilized stimuli from psycholinguistic studies showing that humans can condition reference (i.e. coreference resolution) and syntactic processing on the same discourse structure (implicit causality). We compared both transformer and long short-term memory LMs to find that, contrary to humans, implicit causality only influences LM behavior for reference, not syntax, despite model representations that encode the necessary discourse information. Our results further suggest that LM behavior can contradict not only learned representations of discourse but also syntactic agreement, pointing to shortcomings of standard language modeling.
|
2102.08327
|
Federico Fusco
|
Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi,
Alberto Marchetti Spaccamela, Rebecca Reiffenh\"auser
|
Submodular Maximization subject to a Knapsack Constraint: Combinatorial
Algorithms with Near-optimal Adaptive Complexity
|
This version addresses a gap in the probabilistic analysis of the
approximation guarantees in the previous version of this work. We provide a
simple fix via a standard sampling routine while maintaining the same
approximation guarantees and complexity bounds. (formerly appeared as
arXiv:2007.05014v2 in error)
|
Proceedings of the 38th International Conference on Machine
Learning, PMLR 139:231-242, 2021
| null | null |
cs.DS cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Submodular maximization is a classic algorithmic problem with multiple
applications in data mining and machine learning; there, the growing need to
deal with massive instances motivates the design of algorithms balancing the
quality of the solution with applicability. For the latter, an important
measure is the adaptive complexity, which captures the number of sequential
rounds of parallel computation needed by an algorithm to terminate. In this
work we obtain the first constant factor approximation algorithm for
non-monotone submodular maximization subject to a knapsack constraint with
near-optimal $O(\log n)$ adaptive complexity. Low adaptivity by itself,
however, is not enough: a crucial feature to account for is represented by the
total number of function evaluations (or value queries). Our algorithm asks
$\tilde{O}(n^2)$ value queries, but can be modified to run with only
$\tilde{O}(n)$ instead, while retaining a low adaptive complexity of
$O(\log^2n)$. Besides the above improvement in adaptivity, this is also the
first combinatorial approach with sublinear adaptive complexity for the problem
and yields algorithms comparable to the state-of-the-art even for the special
cases of cardinality constraints or monotone objectives.
|
[
{
"created": "Tue, 16 Feb 2021 18:15:51 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Oct 2023 08:22:31 GMT",
"version": "v2"
}
] |
2024-02-20
|
[
[
"Amanatidis",
"Georgios",
""
],
[
"Fusco",
"Federico",
""
],
[
"Lazos",
"Philip",
""
],
[
"Leonardi",
"Stefano",
""
],
[
"Spaccamela",
"Alberto Marchetti",
""
],
[
"Reiffenhäuser",
"Rebecca",
""
]
] |
Submodular maximization is a classic algorithmic problem with multiple applications in data mining and machine learning; there, the growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the adaptive complexity, which captures the number of sequential rounds of parallel computation needed by an algorithm to terminate. In this work we obtain the first constant factor approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with near-optimal $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: a crucial feature to account for is represented by the total number of function evaluations (or value queries). Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2n)$. Besides the above improvement in adaptivity, this is also the first combinatorial approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives.
|
1612.06699
|
Pierre Sermanet
|
Pierre Sermanet, Kelvin Xu, Sergey Levine
|
Unsupervised Perceptual Rewards for Imitation Learning
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reward function design and exploration time are arguably the biggest
obstacles to the deployment of reinforcement learning (RL) agents in the real
world. In many real-world tasks, designing a reward function takes considerable
hand engineering and often requires additional sensors to be installed just to
measure whether the task has been executed successfully. Furthermore, many
interesting tasks consist of multiple implicit intermediate steps that must be
executed in sequence. Even when the final outcome can be measured, it does not
necessarily provide feedback on these intermediate steps. To address these
issues, we propose leveraging the abstraction power of intermediate visual
representations learned by deep models to quickly infer perceptual reward
functions from small numbers of demonstrations. We present a method that is
able to identify key intermediate steps of a task from only a handful of
demonstration sequences, and automatically identify the most discriminative
features for identifying these steps. This method makes use of the features in
a pre-trained deep model, but does not require any explicit specification of
sub-goals. The resulting reward functions can then be used by an RL agent to
learn to perform the task in real-world settings. To evaluate the learned
reward, we present qualitative results on two real-world tasks and a
quantitative evaluation against a human-designed reward function. We also show
that our method can be used to learn a real-world door opening skill using a
real robot, even when the demonstration used for reward learning is provided by
a human using their own hand. To our knowledge, these are the first results
showing that complex robotic manipulation skills can be learned directly and
without supervised labels from a video of a human performing the task.
Supplementary material and data are available at
https://sermanet.github.io/rewards
|
[
{
"created": "Tue, 20 Dec 2016 15:04:38 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Dec 2016 13:47:34 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jun 2017 21:38:17 GMT",
"version": "v3"
}
] |
2017-06-14
|
[
[
"Sermanet",
"Pierre",
""
],
[
"Xu",
"Kelvin",
""
],
[
"Levine",
"Sergey",
""
]
] |
Reward function design and exploration time are arguably the biggest obstacles to the deployment of reinforcement learning (RL) agents in the real world. In many real-world tasks, designing a reward function takes considerable hand engineering and often requires additional sensors to be installed just to measure whether the task has been executed successfully. Furthermore, many interesting tasks consist of multiple implicit intermediate steps that must be executed in sequence. Even when the final outcome can be measured, it does not necessarily provide feedback on these intermediate steps. To address these issues, we propose leveraging the abstraction power of intermediate visual representations learned by deep models to quickly infer perceptual reward functions from small numbers of demonstrations. We present a method that is able to identify key intermediate steps of a task from only a handful of demonstration sequences, and automatically identify the most discriminative features for identifying these steps. This method makes use of the features in a pre-trained deep model, but does not require any explicit specification of sub-goals. The resulting reward functions can then be used by an RL agent to learn to perform the task in real-world settings. To evaluate the learned reward, we present qualitative results on two real-world tasks and a quantitative evaluation against a human-designed reward function. We also show that our method can be used to learn a real-world door opening skill using a real robot, even when the demonstration used for reward learning is provided by a human using their own hand. To our knowledge, these are the first results showing that complex robotic manipulation skills can be learned directly and without supervised labels from a video of a human performing the task. Supplementary material and data are available at https://sermanet.github.io/rewards
|
1903.10670
|
Xiaoxi Chelsy Xie
|
Xiaoxi Chelsy Xie, Isaac Johnson, Anne Gomez
|
Detecting and Gauging Impact on Wikipedia Page Views
| null | null |
10.1145/3308560.3316751
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding how various external campaigns or events affect readership on
Wikipedia is important to efforts aimed at improving awareness and access to
its content. In this paper, we consider how to build time-series models aimed
at predicting page views on Wikipedia with the goal of detecting whether there
are significant changes to the existing trends. We test these models on two
different events: a video campaign aimed at increasing awareness of Hindi
Wikipedia in India and the page preview feature roll-out---a means of accessing
Wikipedia content without actually visiting the pages---on English and German
Wikipedia. Our models effectively estimate the impact of page preview roll-out,
but do not detect a significant change following the video campaign in India.
We also discuss the utility of other geographies or language editions for
predicting page views from a given area on a given language edition.
|
[
{
"created": "Tue, 26 Mar 2019 04:27:20 GMT",
"version": "v1"
}
] |
2019-03-27
|
[
[
"Xie",
"Xiaoxi Chelsy",
""
],
[
"Johnson",
"Isaac",
""
],
[
"Gomez",
"Anne",
""
]
] |
Understanding how various external campaigns or events affect readership on Wikipedia is important to efforts aimed at improving awareness and access to its content. In this paper, we consider how to build time-series models aimed at predicting page views on Wikipedia with the goal of detecting whether there are significant changes to the existing trends. We test these models on two different events: a video campaign aimed at increasing awareness of Hindi Wikipedia in India and the page preview feature roll-out---a means of accessing Wikipedia content without actually visiting the pages---on English and German Wikipedia. Our models effectively estimate the impact of page preview roll-out, but do not detect a significant change following the video campaign in India. We also discuss the utility of other geographies or language editions for predicting page views from a given area on a given language edition.
|
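As a hedged illustration of the kind of time-series check described in the record above (not the authors' model): fit a simple trend-plus-weekly-seasonality baseline on pre-event page views, then flag the post-event period if its residuals fall well outside the baseline's error band. All numbers here are synthetic.

# Hedged sketch: detect a level shift in page views after an event date.
import numpy as np

rng = np.random.default_rng(1)
n, event = 120, 90                      # daily series; event at day 90 (made up)
t = np.arange(n)
views = 1000 + 2 * t + 50 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 20, n)
views[event:] += 150                    # synthetic impact

# Design matrix: intercept, linear trend, weekly sin/cos seasonality.
X = np.column_stack([np.ones(n), t,
                     np.sin(2 * np.pi * t / 7), np.cos(2 * np.pi * t / 7)])
beta, *_ = np.linalg.lstsq(X[:event], views[:event], rcond=None)
resid_sd = np.std(views[:event] - X[:event] @ beta)

post_resid = views[event:] - X[event:] @ beta
z = post_resid.mean() / (resid_sd / np.sqrt(n - event))
print(f"post-event mean residual z-score: {z:.1f}")  # large => significant change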
1210.6777
|
Miguel Rodrigues
|
Miguel Rodrigues
|
Multiple-antenna fading coherent channels with arbitrary inputs:
Characterization and optimization of the reliable information transmission
rate
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the constrained capacity of multiple-antenna fading coherent
channels, where the receiver knows the channel state but the transmitter knows
only the channel distribution, driven by arbitrary equiprobable discrete inputs
in a regime of high signal-to-noise ratio (${\sf snr}$). In particular, we
capitalize on intersections between information theory and estimation theory
to conceive expansions of the average minimum mean-squared error (MMSE) and
the average mutual information, which lead to an expansion of the constrained
capacity; these expansions capture well the behavior of the quantities in the
asymptotic regime of high ${\sf snr}$. We use the expansions to study the
constrained capacity of various
multiple-antenna fading coherent channels, including Rayleigh fading models,
Ricean fading models and antenna-correlated models. The analysis unveils in
detail the impact of the number of transmit and receive antennas, transmit and
receive antenna correlation, line-of-sight components and the geometry of the
signalling scheme on the reliable information transmission rate. We also use
the expansions to design key system elements, such as power allocation and
precoding schemes, as well as to design space-time signalling schemes for
multiple-antenna fading coherent channels. Simulation results demonstrate
that the expansions lead to very sharp designs.
|
[
{
"created": "Thu, 25 Oct 2012 09:57:31 GMT",
"version": "v1"
}
] |
2012-10-26
|
[
[
"Rodrigues",
"Miguel",
""
]
] |
We investigate the constrained capacity of multiple-antenna fading coherent channels, where the receiver knows the channel state but the transmitter knows only the channel distribution, driven by arbitrary equiprobable discrete inputs in a regime of high signal-to-noise ratio (${\sf snr}$). In particular, we capitalize on intersections between information theory and estimation theory to conceive expansions of the average minimum mean-squared error (MMSE) and the average mutual information, which lead to an expansion of the constrained capacity; these expansions capture well the behavior of the quantities in the asymptotic regime of high ${\sf snr}$. We use the expansions to study the constrained capacity of various multiple-antenna fading coherent channels, including Rayleigh fading models, Ricean fading models and antenna-correlated models. The analysis unveils in detail the impact of the number of transmit and receive antennas, transmit and receive antenna correlation, line-of-sight components and the geometry of the signalling scheme on the reliable information transmission rate. We also use the expansions to design key system elements, such as power allocation and precoding schemes, as well as to design space-time signalling schemes for multiple-antenna fading coherent channels. Simulation results demonstrate that the expansions lead to very sharp designs.
|
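The "intersection between information theory and estimation theory" invoked in the record above is, in its standard form, the I-MMSE relation of Guo, Shamai and Verd\'u; a hedged statement (in nats, for a scalar Gaussian channel $Y = \sqrt{{\sf snr}}\, X + N$; the vector-channel analogue replaces the square with a squared norm) is:

\[ \frac{d}{d\,{\sf snr}}\, I({\sf snr}) \;=\; \mathrm{mmse}({\sf snr}) \;=\; \mathbb{E}\!\left[ \bigl( X - \mathbb{E}[X \mid Y] \bigr)^2 \right]. \]

High-${\sf snr}$ expansions of the MMSE therefore translate directly into expansions of the mutual information and hence of the constrained capacity.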
2012.08640
|
Gustau Camps-Valls
|
Jochem Verrelst, Juan Pablo Rivera, Anatoly Gitelson, Jesus Delegido,
Jos\'e Moreno, Gustau Camps-Valls
|
Spectral band selection for vegetation properties retrieval using
Gaussian processes regression
| null |
International Journal of Applied Earth Observation and
Geoinformation Volume 52, October 2016, Pages 554-567
|
10.1016/j.jag.2016.07.016
| null |
cs.CV eess.IV stat.AP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With current and upcoming imaging spectrometers, automated band analysis
techniques are needed to enable efficient identification of the most
informative bands and to facilitate optimized processing of spectral data into
estimates of biophysical variables. This paper introduces an automated
spectral band analysis tool (BAT) based on Gaussian processes regression (GPR)
for the spectral analysis of vegetation properties. The GPR-BAT procedure
sequentially removes, in a backward fashion, the band contributing least to
the regression model for a given variable until only one band is kept. GPR-BAT
is implemented within the framework of the free ARTMO's MLRA (machine learning
regression algorithms) toolbox, which is dedicated to transforming optical
remote sensing images into biophysical products. GPR-BAT makes it possible (1)
to identify the most informative bands in relating spectral data to a
biophysical variable, and (2) to find the smallest number of bands that
preserve optimally accurate predictions. This study concludes that judicious
band selection of hyperspectral data is strictly required for optimal
vegetation properties mapping.
|
[
{
"created": "Mon, 7 Dec 2020 09:28:33 GMT",
"version": "v1"
}
] |
2020-12-17
|
[
[
"Verrelst",
"Jochem",
""
],
[
"Rivera",
"Juan Pablo",
""
],
[
"Gitelson",
"Anatoly",
""
],
[
"Delegido",
"Jesus",
""
],
[
"Moreno",
"José",
""
],
[
"Camps-Valls",
"Gustau",
""
]
] |
With current and upcoming imaging spectrometers, automated band analysis techniques are needed to enable efficient identification of the most informative bands and to facilitate optimized processing of spectral data into estimates of biophysical variables. This paper introduces an automated spectral band analysis tool (BAT) based on Gaussian processes regression (GPR) for the spectral analysis of vegetation properties. The GPR-BAT procedure sequentially removes, in a backward fashion, the band contributing least to the regression model for a given variable until only one band is kept. GPR-BAT is implemented within the framework of the free ARTMO's MLRA (machine learning regression algorithms) toolbox, which is dedicated to transforming optical remote sensing images into biophysical products. GPR-BAT makes it possible (1) to identify the most informative bands in relating spectral data to a biophysical variable, and (2) to find the smallest number of bands that preserve optimally accurate predictions. This study concludes that judicious band selection of hyperspectral data is strictly required for optimal vegetation properties mapping.
|
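A hedged sketch of the sequential backward band elimination described in the record above (not the ARTMO toolbox itself; data, sizes, and the cross-validation setup are illustrative assumptions): at each step, drop the band whose removal hurts cross-validated GPR accuracy the least.

# Hedged sketch: GPR-based backward band elimination (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_samples, n_bands = 80, 10                    # illustrative sizes
X = rng.normal(size=(n_samples, n_bands))      # stand-in for spectra
y = X[:, 2] - 0.5 * X[:, 7] + rng.normal(0, 0.1, n_samples)  # target variable

bands = list(range(n_bands))
while len(bands) > 1:
    # Score the model with each candidate band removed; keep the best option.
    scores = {b: cross_val_score(GaussianProcessRegressor(),
                                 X[:, [c for c in bands if c != b]], y,
                                 cv=3).mean()
              for b in bands}
    worst = max(scores, key=scores.get)        # least contributing band
    print(f"removing band {worst} (cv R^2 without it: {scores[worst]:.3f})")
    bands.remove(worst)
print("last band kept:", bands)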
2312.11444
|
Graham Neubig
|
Syeda Nahida Akter, Zichun Yu, Aashiq Muhamed, Tianyue Ou, Alex
B\"auerle, \'Angel Alexander Cabrera, Krish Dholakia, Chenyan Xiong, Graham
Neubig
|
An In-depth Look at Gemini's Language Abilities
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recently released Google Gemini class of models are the first to
comprehensively report results that rival the OpenAI GPT series across a wide
variety of tasks. In this paper, we do an in-depth exploration of Gemini's
language abilities, making two contributions. First, we provide a third-party,
objective comparison of the abilities of the OpenAI GPT and Google Gemini
models with reproducible code and fully transparent results. Second, we take a
closer look at the results, identifying areas where one of the two model
classes excels. We perform this analysis over 10 datasets testing a variety of
language abilities, including reasoning, answering knowledge-based questions,
solving math problems, translating between languages, generating code, and
acting as instruction-following agents. From this analysis, we find that Gemini
Pro achieves accuracy that is close but slightly inferior to the corresponding
GPT 3.5 Turbo on all tasks that we benchmarked. We further provide explanations
for some of this under-performance, including failures in mathematical
reasoning with many digits, sensitivity to multiple-choice answer ordering,
aggressive content filtering, and others. We also identify areas where Gemini
demonstrates comparably high performance, including generation into non-English
languages, and handling longer and more complex reasoning chains. Code and data
for reproduction can be found at https://github.com/neulab/gemini-benchmark
|
[
{
"created": "Mon, 18 Dec 2023 18:47:42 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Dec 2023 12:25:10 GMT",
"version": "v2"
}
] |
2023-12-27
|
[
[
"Akter",
"Syeda Nahida",
""
],
[
"Yu",
"Zichun",
""
],
[
"Muhamed",
"Aashiq",
""
],
[
"Ou",
"Tianyue",
""
],
[
"Bäuerle",
"Alex",
""
],
[
"Cabrera",
"Ángel Alexander",
""
],
[
"Dholakia",
"Krish",
""
],
[
"Xiong",
"Chenyan",
""
],
[
"Neubig",
"Graham",
""
]
] |
The recently released Google Gemini class of models are the first to comprehensively report results that rival the OpenAI GPT series across a wide variety of tasks. In this paper, we do an in-depth exploration of Gemini's language abilities, making two contributions. First, we provide a third-party, objective comparison of the abilities of the OpenAI GPT and Google Gemini models with reproducible code and fully transparent results. Second, we take a closer look at the results, identifying areas where one of the two model classes excels. We perform this analysis over 10 datasets testing a variety of language abilities, including reasoning, answering knowledge-based questions, solving math problems, translating between languages, generating code, and acting as instruction-following agents. From this analysis, we find that Gemini Pro achieves accuracy that is close but slightly inferior to the corresponding GPT 3.5 Turbo on all tasks that we benchmarked. We further provide explanations for some of this under-performance, including failures in mathematical reasoning with many digits, sensitivity to multiple-choice answer ordering, aggressive content filtering, and others. We also identify areas where Gemini demonstrates comparably high performance, including generation into non-English languages, and handling longer and more complex reasoning chains. Code and data for reproduction can be found at https://github.com/neulab/gemini-benchmark
|
2101.07390
|
Vijay Vazirani
|
Vijay V. Vazirani
|
The General Graph Matching Game: Approximate Core
|
10 pages
| null | null | null |
cs.GT econ.TH math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
The classic paper of Shapley and Shubik \cite{Shapley1971assignment}
characterized the core of the assignment game using ideas from matching theory
and LP-duality theory and their highly non-trivial interplay. Whereas the core
of this game is always non-empty, that of the general graph matching game can
be empty.
This paper salvages the situation by giving an imputation in the
$2/3$-approximate core for the latter. This bound is best possible, since it is
the integrality gap of the natural underlying LP. Our profit allocation method
goes further: the multiplier on the profit of an agent is often better than ${2
\over 3}$ and lies in the interval $[{2 \over 3}, 1]$, depending on how
severely constrained the agent is.
Next, we provide new insights showing how discerning the core imputations of
the assignment game are by studying them through the lens of complementary
slackness.
We present a relationship between the competitiveness of individuals and teams
of agents and the amount of profit they accrue in imputations that lie in the
core, where by {\em competitiveness} we mean whether an individual or a team is
matched in every/some/no maximum matching. This also sheds light on the
phenomenon of degeneracy in assignment games, i.e., when the maximum weight
matching is not unique.
The core is a quintessential solution concept in cooperative game theory. It
contains all ways of distributing the total worth of a game among agents in
such a way that no sub-coalition has incentive to secede from the grand
coalition. Our imputation, in the $2/3$-approximate core, implies that a
sub-coalition will gain at most a $3/2$ factor by seceding, and less in typical
cases.
|
[
{
"created": "Tue, 19 Jan 2021 00:53:22 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Jan 2021 05:10:16 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Apr 2021 12:08:16 GMT",
"version": "v3"
},
{
"created": "Fri, 16 Jul 2021 17:55:03 GMT",
"version": "v4"
}
] |
2021-07-19
|
[
[
"Vazirani",
"Vijay V.",
""
]
] |
The classic paper of Shapley and Shubik \cite{Shapley1971assignment} characterized the core of the assignment game using ideas from matching theory and LP-duality theory and their highly non-trivial interplay. Whereas the core of this game is always non-empty, that of the general graph matching game can be empty. This paper salvages the situation by giving an imputation in the $2/3$-approximate core for the latter. This bound is best possible, since it is the integrality gap of the natural underlying LP. Our profit allocation method goes further: the multiplier on the profit of an agent is often better than ${2 \over 3}$ and lies in the interval $[{2 \over 3}, 1]$, depending on how severely constrained the agent is. Next, we provide new insights showing how discerning the core imputations of the assignment game are by studying them through the lens of complementary slackness. We present a relationship between the competitiveness of individuals and teams of agents and the amount of profit they accrue in imputations that lie in the core, where by {\em competitiveness} we mean whether an individual or a team is matched in every/some/no maximum matching. This also sheds light on the phenomenon of degeneracy in assignment games, i.e., when the maximum weight matching is not unique. The core is a quintessential solution concept in cooperative game theory. It contains all ways of distributing the total worth of a game among agents in such a way that no sub-coalition has incentive to secede from the grand coalition. Our imputation, in the $2/3$-approximate core, implies that a sub-coalition will gain at most a $3/2$ factor by seceding, and less in typical cases.
|
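For readers skimming the record above, one standard way to state the $2/3$-approximate core condition (hedged; the notation is mine and may differ from the paper's): an imputation $x$ distributes the worth of the grand coalition while guaranteeing every coalition at least two thirds of what it could earn on its own,

\[ \sum_{i \in V} x_i = p(V) \quad \text{and} \quad \sum_{i \in S} x_i \;\ge\; \tfrac{2}{3}\, p(S) \quad \text{for all } S \subseteq V, \]

where $p(S)$ denotes the maximum weight of a matching in the subgraph induced by $S$. This is exactly why seceding can gain a coalition at most a $3/2$ factor.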
1611.06605
|
Haris Angelidakis
|
Haris Angelidakis, Yury Makarychev and Vsevolod Oparin
|
Algorithmic and Hardness Results for the Hub Labeling Problem
|
To appear in SODA17
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been significant success in designing highly efficient algorithms
for distance and shortest-path queries in recent years; many of the
state-of-the-art algorithms use the hub labeling framework. In this paper, we
study the approximability of the Hub Labeling problem. We prove a hardness of
$\Omega(\log n)$ for Hub Labeling, matching known approximation guarantees. The
hardness result applies to graphs that have multiple shortest paths between
some pairs of vertices. No hardness of approximation results were known
previously.
Then, we focus on graphs that have a unique shortest path between each pair
of vertices. This is a very natural family of graphs, and much research on the
Hub Labeling problem has studied such graphs. We give an $O(\log D)$
approximation algorithm for graphs of diameter $D$ with unique shortest paths.
In particular, we get an $O(\log \log n)$ approximation for graphs of
polylogarithmic diameter, while previously known algorithms gave an
$O(\log n)$ approximation. Finally, we present a polynomial-time approximation
scheme (PTAS)
and quasi-polynomial time algorithms for Hub Labeling on trees; additionally,
we analyze a simple combinatorial heuristic for Hub Labeling on trees, proposed
by Peleg in 2000. We show that this heuristic gives an approximation factor of
2.
|
[
{
"created": "Sun, 20 Nov 2016 22:44:12 GMT",
"version": "v1"
}
] |
2016-11-22
|
[
[
"Angelidakis",
"Haris",
""
],
[
"Makarychev",
"Yury",
""
],
[
"Oparin",
"Vsevolod",
""
]
] |
There has been significant success in designing highly efficient algorithms for distance and shortest-path queries in recent years; many of the state-of-the-art algorithms use the hub labeling framework. In this paper, we study the approximability of the Hub Labeling problem. We prove a hardness of $\Omega(\log n)$ for Hub Labeling, matching known approximation guarantees. The hardness result applies to graphs that have multiple shortest paths between some pairs of vertices. No hardness of approximation results were known previously. Then, we focus on graphs that have a unique shortest path between each pair of vertices. This is a very natural family of graphs, and much research on the Hub Labeling problem has studied such graphs. We give an $O(\log D)$ approximation algorithm for graphs of diameter $D$ with unique shortest paths. In particular, we get an $O(\log \log n)$ approximation for graphs of polylogarithmic diameter, while previously known algorithms gave an $O(\log n)$ approximation. Finally, we present a polynomial-time approximation scheme (PTAS) and quasi-polynomial time algorithms for Hub Labeling on trees; additionally, we analyze a simple combinatorial heuristic for Hub Labeling on trees, proposed by Peleg in 2000. We show that this heuristic gives an approximation factor of 2.
|
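For context on the record above: in the hub labeling framework, each vertex $v$ receives a label $L(v)$, a set of hubs with precomputed distances, chosen so that every pair of vertices has a common hub on a shortest path between them (the covering property). A distance query then reduces to

\[ d(u,v) \;=\; \min_{w \,\in\, L(u) \cap L(v)} \bigl( d(u,w) + d(w,v) \bigr), \]

and the Hub Labeling problem asks to minimize the total label size $\sum_v |L(v)|$ subject to this covering property.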
1204.0067
|
Shuqing Zeng
|
Shuqing Zeng
|
Estimating Rigid Transformation Between Two Range Maps Using Expectation
Maximization Algorithm
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
We address the problem of estimating a rigid transformation between two point
sets, which is a key module for target tracking systems using Light Detection
And Ranging (LiDAR). A fast implementation of the Expectation-Maximization
(EM) algorithm is presented whose complexity is $O(N)$, with $N$ the number of
scan points.
|
[
{
"created": "Sat, 31 Mar 2012 03:20:02 GMT",
"version": "v1"
}
] |
2012-04-03
|
[
[
"Zeng",
"Shuqing",
""
]
] |
We address the problem of estimating a rigid transformation between two point sets, which is a key module for target tracking systems using Light Detection And Ranging (LiDAR). A fast implementation of the Expectation-Maximization (EM) algorithm is presented whose complexity is $O(N)$, with $N$ the number of scan points.
|
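A hedged sketch of the estimation problem in the record above: alternate an E-step that soft-assigns correspondences under a Gaussian noise model and an M-step that solves the rigid transform in closed form via SVD (Kabsch/Procrustes). This naive version is $O(NM)$ per iteration; the paper's $O(N)$ implementation is not reproduced here, and all data is synthetic.

# Hedged sketch: EM-style rigid alignment of two 2-D point sets (illustrative).
import numpy as np

rng = np.random.default_rng(3)
src = rng.normal(size=(60, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([1.0, -0.5]) + rng.normal(0, 0.01, (60, 2))

R, t, sigma2 = np.eye(2), np.zeros(2), 1.0
for _ in range(20):
    # E-step: soft correspondence weights under a Gaussian noise model.
    moved = src @ R.T + t
    d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma2))
    W /= W.sum(axis=1, keepdims=True)
    # M-step: closed-form rigid transform toward each point's expected match.
    virt = W @ dst
    mu_s, mu_v = src.mean(0), virt.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (virt - mu_v))
    D = np.diag([1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_v - R @ mu_s
    sigma2 = max(((src @ R.T + t - virt) ** 2).mean(), 1e-6)
print("estimated rotation:\n", R, "\nestimated translation:", t)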
2302.07072
|
Mengxiao Zhang
|
Fengjuan Jia, Mengxiao Zhang, Jiamou Liu, Bakh Khoussainov
|
Differentially Private Diffusion Auction: The Single-unit Case
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Diffusion auction refers to an emerging paradigm of online marketplace where
an auctioneer utilises a social network to attract potential buyers. Diffusion
auction poses significant privacy risks. From the auction outcome, it is
possible to infer hidden, and potentially sensitive, preferences of buyers. To
mitigate such risks, we initiate the study of differential privacy (DP) in
diffusion auction mechanisms. DP is a well-established notion of privacy that
protects a system against inference attacks. Achieving DP in diffusion auctions
is non-trivial, as carefully designed auction rules are required to
incentivise buyers to truthfully report their neighbourhood. We study the
single-unit
case and design two differentially private diffusion mechanisms (DPDMs):
recursive DPDM and layered DPDM. We prove that these mechanisms guarantee
differential privacy, incentive compatibility and individual rationality for
both valuations and neighbourhood. We then empirically compare their
performance on real and synthetic datasets.
|
[
{
"created": "Tue, 14 Feb 2023 14:35:45 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Feb 2023 15:08:16 GMT",
"version": "v2"
}
] |
2023-02-17
|
[
[
"Jia",
"Fengjuan",
""
],
[
"Zhang",
"Mengxiao",
""
],
[
"Liu",
"Jiamou",
""
],
[
"Khoussainov",
"Bakh",
""
]
] |
Diffusion auction refers to an emerging paradigm of online marketplace where an auctioneer utilises a social network to attract potential buyers. Diffusion auction poses significant privacy risks. From the auction outcome, it is possible to infer hidden, and potentially sensitive, preferences of buyers. To mitigate such risks, we initiate the study of differential privacy (DP) in diffusion auction mechanisms. DP is a well-established notion of privacy that protects a system against inference attacks. Achieving DP in diffusion auctions is non-trivial, as carefully designed auction rules are required to incentivise buyers to truthfully report their neighbourhood. We study the single-unit case and design two differentially private diffusion mechanisms (DPDMs): recursive DPDM and layered DPDM. We prove that these mechanisms guarantee differential privacy, incentive compatibility and individual rationality for both valuations and neighbourhood. We then empirically compare their performance on real and synthetic datasets.
|
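As a hedged, generic illustration of the differential-privacy ingredient in the record above (not the paper's recursive or layered DPDMs): the exponential mechanism selects an outcome with probability proportional to $\exp(\varepsilon\, u / (2\Delta u))$, trading utility for privacy. The bids and sensitivity below are made-up assumptions.

# Hedged sketch: exponential mechanism for privately selecting a winner.
import numpy as np

rng = np.random.default_rng(4)
bids = np.array([3.0, 7.0, 5.0, 6.5])    # made-up valuations
eps, sensitivity = 1.0, 1.0               # assume utility changes <= 1 per bidder

# Utility of choosing bidder i as winner = that bidder's bid.
scores = eps * bids / (2 * sensitivity)
probs = np.exp(scores - scores.max())     # subtract max for numerical stability
probs /= probs.sum()
winner = rng.choice(len(bids), p=probs)
print("selection probabilities:", np.round(probs, 3), "-> winner:", winner)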
2401.09854
|
Samira Afzal
|
Samira Afzal, Narges Mehran, Zoha Azimi Ourimi, Farzad Tashtarian,
Hadi Amirpour, Radu Prodan, Christian Timmerer
|
A Survey on Energy Consumption and Environmental Impact of Video
Streaming
| null | null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Climate change challenges require a notable decrease in worldwide greenhouse
gas (GHG) emissions across technology sectors. Digital technologies, especially
video streaming, which accounts for most Internet traffic, are no exception.
Video
streaming demand increases with remote working, multimedia communication
services (e.g., WhatsApp, Skype), video streaming content (e.g., YouTube,
Netflix), video resolution (4K/8K, 50 fps/60 fps), and multi-view video, making
energy consumption and environmental footprint critical. This survey
contributes to a better understanding of sustainable and efficient video
streaming technologies by providing insights into the state-of-the-art and
potential future directions for researchers, developers, and engineers, service
providers, hosting platforms, and consumers. We widen this survey's focus on
content provisioning and content consumption based on the observation that
continuously active network equipment underneath video streaming consumes
substantial energy independent of the transmitted data type. We propose a
taxonomy of factors that affect the energy consumption in video streaming, such
as encoding schemes, resource requirements, storage, content retrieval,
decoding, and display. We identify notable weaknesses in video streaming that
require further research for improved energy efficiency: (1) fixed bitrate
ladders in HTTP live streaming; (2) inefficient hardware utilization of
existing video players; (3) lack of comprehensive open energy measurement
dataset covering various device types and coding parameters for reproducible
research.
|
[
{
"created": "Thu, 18 Jan 2024 10:10:25 GMT",
"version": "v1"
}
] |
2024-01-19
|
[
[
"Afzal",
"Samira",
""
],
[
"Mehran",
"Narges",
""
],
[
"Ourimi",
"Zoha Azimi",
""
],
[
"Tashtarian",
"Farzad",
""
],
[
"Amirpour",
"Hadi",
""
],
[
"Prodan",
"Radu",
""
],
[
"Timmerer",
"Christian",
""
]
] |
Climate change challenges require a notable decrease in worldwide greenhouse gas (GHG) emissions across technology sectors. Digital technologies, especially video streaming, which accounts for most Internet traffic, are no exception. Video streaming demand increases with remote working, multimedia communication services (e.g., WhatsApp, Skype), video streaming content (e.g., YouTube, Netflix), video resolution (4K/8K, 50 fps/60 fps), and multi-view video, making energy consumption and environmental footprint critical. This survey contributes to a better understanding of sustainable and efficient video streaming technologies by providing insights into the state-of-the-art and potential future directions for researchers, developers, and engineers, service providers, hosting platforms, and consumers. We widen this survey's focus on content provisioning and content consumption based on the observation that continuously active network equipment underneath video streaming consumes substantial energy independent of the transmitted data type. We propose a taxonomy of factors that affect the energy consumption in video streaming, such as encoding schemes, resource requirements, storage, content retrieval, decoding, and display. We identify notable weaknesses in video streaming that require further research for improved energy efficiency: (1) fixed bitrate ladders in HTTP live streaming; (2) inefficient hardware utilization of existing video players; (3) lack of comprehensive open energy measurement dataset covering various device types and coding parameters for reproducible research.
|
1707.09700
|
Yikang Li
|
Yikang Li, Wanli Ouyang, Bolei Zhou, Kun Wang, Xiaogang Wang
|
Scene Graph Generation from Objects, Phrases and Region Captions
|
accepted by ICCV 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection, scene graph generation and region captioning, which are
three scene understanding tasks at different semantic levels, are tied
together: scene graphs are generated on top of objects detected in an image
with their pairwise relationship predicted, while region captioning gives a
language description of the objects, their attributes, relations, and other
context information. In this work, to leverage the mutual connections across
semantic levels, we propose a novel neural network model, termed as Multi-level
Scene Description Network (denoted as MSDN), to solve the three vision tasks
jointly in an end-to-end manner. Objects, phrases, and caption regions are
first aligned with a dynamic graph based on their spatial and semantic
connections. Then a feature refining structure is used to pass messages across
the three levels of semantic tasks through the graph. We benchmark the learned
model on three tasks, and show the joint learning across three tasks with our
proposed method can bring mutual improvements over previous models.
In particular, on the scene graph generation task, our proposed method
outperforms the state-of-the-art method by a margin of more than 3%.
|
[
{
"created": "Mon, 31 Jul 2017 02:40:19 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Sep 2017 05:05:29 GMT",
"version": "v2"
}
] |
2017-09-18
|
[
[
"Li",
"Yikang",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Zhou",
"Bolei",
""
],
[
"Wang",
"Kun",
""
],
[
"Wang",
"Xiaogang",
""
]
] |
Object detection, scene graph generation and region captioning, which are three scene understanding tasks at different semantic levels, are tied together: scene graphs are generated on top of objects detected in an image with their pairwise relationship predicted, while region captioning gives a language description of the objects, their attributes, relations, and other context information. In this work, to leverage the mutual connections across semantic levels, we propose a novel neural network model, termed as Multi-level Scene Description Network (denoted as MSDN), to solve the three vision tasks jointly in an end-to-end manner. Objects, phrases, and caption regions are first aligned with a dynamic graph based on their spatial and semantic connections. Then a feature refining structure is used to pass messages across the three levels of semantic tasks through the graph. We benchmark the learned model on three tasks, and show the joint learning across three tasks with our proposed method can bring mutual improvements over previous models. In particular, on the scene graph generation task, our proposed method outperforms the state-of-the-art method by a margin of more than 3%.
|
1506.00307
|
Emad Soroush
|
Emad Soroush, Magdalena Balazinska, Simon Krughoff, Andrew Connolly
|
Efficient Iterative Processing in the SciDB Parallel Array Engine
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many scientific data-intensive applications perform iterative computations on
array data. There exist multiple engines specialized for array processing.
These engines efficiently support various types of operations, but none
includes native support for iterative processing. In this paper, we develop a
model for iterative array computations and a series of optimizations. We
evaluate the benefits of an optimized, native support for iterative array
processing on the SciDB engine and real workloads from the astronomy domain.
|
[
{
"created": "Sun, 31 May 2015 23:37:58 GMT",
"version": "v1"
}
] |
2015-06-02
|
[
[
"Soroush",
"Emad",
""
],
[
"Balazinska",
"Magdalena",
""
],
[
"Krughoff",
"Simon",
""
],
[
"Connolly",
"Andrew",
""
]
] |
Many scientific data-intensive applications perform iterative computations on array data. There exist multiple engines specialized for array processing. These engines efficiently support various types of operations, but none includes native support for iterative processing. In this paper, we develop a model for iterative array computations and a series of optimizations. We evaluate the benefits of an optimized, native support for iterative array processing on the SciDB engine and real workloads from the astronomy domain.
|
1909.04189
|
Sandeep Soni
|
Sandeep Soni, Kristina Lerman, Jacob Eisenstein
|
Follow the Leader: Documents on the Leading Edge of Semantic Change Get
More Citations
|
25 pages, 3 figures, To appear in the Journal of the Association of
Information Sciences and Technology
| null | null | null |
cs.CL cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diachronic word embeddings -- vector representations of words over time --
offer remarkable insights into the evolution of language and provide a tool for
quantifying sociocultural change from text documents. Prior work has used such
embeddings to identify shifts in the meaning of individual words. However,
simply knowing that a word has changed in meaning is insufficient to identify
the instances of word usage that convey the historical or the newer meaning. In
this paper, we link diachronic word embeddings to documents, by situating those
documents as leaders or laggards with respect to ongoing semantic changes.
Specifically, we propose a novel method to quantify the degree of semantic
progressiveness in each word usage, and then show how these usages can be
aggregated to obtain scores for each document. We analyze two large collections
of documents, representing legal opinions and scientific articles. Documents
that are scored as semantically progressive receive a larger number of
citations, indicating that they are especially influential. Our work thus
provides a new technique for identifying lexical semantic leaders and
demonstrates a new link between progressive use of language and influence in a
citation network.
|
[
{
"created": "Mon, 9 Sep 2019 22:43:02 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Oct 2020 19:41:11 GMT",
"version": "v2"
}
] |
2020-10-05
|
[
[
"Soni",
"Sandeep",
""
],
[
"Lerman",
"Kristina",
""
],
[
"Eisenstein",
"Jacob",
""
]
] |
Diachronic word embeddings -- vector representations of words over time -- offer remarkable insights into the evolution of language and provide a tool for quantifying sociocultural change from text documents. Prior work has used such embeddings to identify shifts in the meaning of individual words. However, simply knowing that a word has changed in meaning is insufficient to identify the instances of word usage that convey the historical or the newer meaning. In this paper, we link diachronic word embeddings to documents, by situating those documents as leaders or laggards with respect to ongoing semantic changes. Specifically, we propose a novel method to quantify the degree of semantic progressiveness in each word usage, and then show how these usages can be aggregated to obtain scores for each document. We analyze two large collections of documents, representing legal opinions and scientific articles. Documents that are scored as semantically progressive receive a larger number of citations, indicating that they are especially influential. Our work thus provides a new technique for identifying lexical semantic leaders and demonstrates a new link between progressive use of language and influence in a citation network.
|
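A hedged sketch of the scoring idea in the record above (my notation, not necessarily the authors'): project a word usage's contextual vector onto the direction of that word's semantic change, then average over a document's usages to score the document. All vectors below are synthetic stand-ins.

# Hedged sketch: score documents by semantic progressiveness (illustrative).
import numpy as np

rng = np.random.default_rng(5)
dim = 50
v_old, v_new = rng.normal(size=dim), rng.normal(size=dim)  # word vectors at t0, t1
change_dir = v_new - v_old
change_dir /= np.linalg.norm(change_dir)

def usage_progressiveness(usage_vec):
    """Higher = usage aligned with the word's newer sense."""
    return float(usage_vec @ change_dir)

doc_usages = rng.normal(size=(8, dim)) + 0.5 * v_new       # fake usages leaning new
doc_score = np.mean([usage_progressiveness(u) for u in doc_usages])
print(f"document progressiveness score: {doc_score:.3f}")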
1608.02676
|
Krishna Kumar Singh
|
Krishna Kumar Singh and Yong Jae Lee
|
End-to-End Localization and Ranking for Relative Attributes
|
Appears in European Conference on Computer Vision (ECCV), 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an end-to-end deep convolutional network to simultaneously
localize and rank relative visual attributes, given only weakly-supervised
pairwise image comparisons. Unlike previous methods, our network jointly learns
the attribute's features, localization, and ranker. The localization module of
our network discovers the most informative image region for the attribute,
which is then used by the ranking module to learn a ranking model of the
attribute. Our end-to-end framework also significantly speeds up processing and
is much faster than previous methods. We show state-of-the-art ranking results
on various relative attribute datasets, and our qualitative localization
results clearly demonstrate our network's ability to learn meaningful image
patches.
|
[
{
"created": "Tue, 9 Aug 2016 02:19:37 GMT",
"version": "v1"
}
] |
2016-08-10
|
[
[
"Singh",
"Krishna Kumar",
""
],
[
"Lee",
"Yong Jae",
""
]
] |
We propose an end-to-end deep convolutional network to simultaneously localize and rank relative visual attributes, given only weakly-supervised pairwise image comparisons. Unlike previous methods, our network jointly learns the attribute's features, localization, and ranker. The localization module of our network discovers the most informative image region for the attribute, which is then used by the ranking module to learn a ranking model of the attribute. Our end-to-end framework also significantly speeds up processing and is much faster than previous methods. We show state-of-the-art ranking results on various relative attribute datasets, and our qualitative localization results clearly demonstrate our network's ability to learn meaningful image patches.
|
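Since the record above trains a ranker from weakly-supervised pairwise image comparisons, here is a hedged sketch of a standard pairwise (RankNet-style) ranking loss of the kind such rankers typically use; it is not claimed to be the paper's exact loss.

# Hedged sketch: RankNet-style pairwise loss for relative-attribute ranking.
import numpy as np

def pairwise_rank_loss(score_a, score_b):
    """Loss for the supervision 'image A shows more of the attribute than B'."""
    # P(A > B) = sigmoid(s_A - s_B); minimize the negative log-likelihood.
    return np.log1p(np.exp(-(score_a - score_b)))

print(pairwise_rank_loss(2.0, 1.0))   # small loss: ordering already correct
print(pairwise_rank_loss(1.0, 2.0))   # large loss: ordering violated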
2304.02146
|
Ignavier Ng
|
Ignavier Ng, Biwei Huang, Kun Zhang
|
Structure Learning with Continuous Optimization: A Sober Look and Beyond
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates in which cases continuous optimization for directed
acyclic graph (DAG) structure learning can and cannot perform well and why this
happens, and suggests possible directions to make the search procedure more
reliable. Reisach et al. (2021) suggested that the remarkable performance of
several continuous structure learning approaches is primarily driven by a high
agreement between the order of increasing marginal variances and the
topological order, and demonstrated that these approaches do not perform well
after data standardization. We analyze this phenomenon for continuous
approaches assuming equal and non-equal noise variances, and show that the
statement may not hold in either case by providing counterexamples,
justifications, and possible alternative explanations. We further demonstrate
that nonconvexity may be a main concern especially for the non-equal noise
variances formulation, while recent advances in continuous structure learning
fail to achieve improvement in this case. Our findings suggest that future
works should take into account the non-equal noise variances formulation to
handle more general settings and for a more comprehensive empirical evaluation.
Lastly, we provide insights into other aspects of the search procedure,
including thresholding and sparsity, and show that they play an important role
in the final solutions.
|
[
{
"created": "Tue, 4 Apr 2023 22:10:40 GMT",
"version": "v1"
}
] |
2023-04-06
|
[
[
"Ng",
"Ignavier",
""
],
[
"Huang",
"Biwei",
""
],
[
"Zhang",
"Kun",
""
]
] |
This paper investigates in which cases continuous optimization for directed acyclic graph (DAG) structure learning can and cannot perform well and why this happens, and suggests possible directions to make the search procedure more reliable. Reisach et al. (2021) suggested that the remarkable performance of several continuous structure learning approaches is primarily driven by a high agreement between the order of increasing marginal variances and the topological order, and demonstrated that these approaches do not perform well after data standardization. We analyze this phenomenon for continuous approaches assuming equal and non-equal noise variances, and show that the statement may not hold in either case by providing counterexamples, justifications, and possible alternative explanations. We further demonstrate that nonconvexity may be a main concern especially for the non-equal noise variances formulation, while recent advances in continuous structure learning fail to achieve improvement in this case. Our findings suggest that future works should take into account the non-equal noise variances formulation to handle more general settings and for a more comprehensive empirical evaluation. Lastly, we provide insights into other aspects of the search procedure, including thresholding and sparsity, and show that they play an important role in the final solutions.
|
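The Reisach et al. (2021) observation cited in the record above can be checked on a toy linear SEM; a hedged sketch (illustrative, not the paper's experiments): in a raw-scale causal chain, marginal variances grow along the topological order, and standardization removes that signal.

# Hedged sketch: marginal variance ordering vs. topological order in a toy SEM.
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
x1 = rng.normal(0, 1, n)
x2 = 1.5 * x1 + rng.normal(0, 1, n)          # chain: x1 -> x2 -> x3
x3 = 1.5 * x2 + rng.normal(0, 1, n)
data = np.column_stack([x1, x2, x3])

print("variances:", np.round(data.var(axis=0), 2))
print("variance order matches causal order:",
      list(np.argsort(data.var(axis=0))) == [0, 1, 2])

standardized = (data - data.mean(0)) / data.std(0)
print("after standardization:", np.round(standardized.var(axis=0), 2))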
2203.03466
|
Greg Yang
|
Greg Yang, Edward J. Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu,
David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, Jianfeng Gao
|
Tensor Programs V: Tuning Large Neural Networks via Zero-Shot
Hyperparameter Transfer
|
NeurIPS 2021
| null | null | null |
cs.LG cond-mat.dis-nn cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hyperparameter (HP) tuning in deep learning is an expensive process,
prohibitively so for neural networks (NNs) with billions of parameters. We show
that, in the recently discovered Maximal Update Parametrization (muP), many
optimal HPs remain stable even as model size changes. This leads to a new HP
tuning paradigm we call muTransfer: parametrize the target model in muP, tune
the HP indirectly on a smaller model, and zero-shot transfer them to the
full-sized model, i.e., without directly tuning the latter at all. We verify
muTransfer on Transformer and ResNet. For example, 1) by transferring
pretraining HPs from a model of 13M parameters, we outperform published numbers
of BERT-large (350M parameters), with a total tuning cost equivalent to
pretraining BERT-large once; 2) by transferring from 40M parameters, we
outperform published numbers of the 6.7B GPT-3 model, with tuning cost only 7%
of total pretraining cost. A Pytorch implementation of our technique can be
found at github.com/microsoft/mup and installable via `pip install mup`.
|
[
{
"created": "Mon, 7 Mar 2022 15:37:35 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Mar 2022 08:12:14 GMT",
"version": "v2"
}
] |
2022-03-29
|
[
[
"Yang",
"Greg",
""
],
[
"Hu",
"Edward J.",
""
],
[
"Babuschkin",
"Igor",
""
],
[
"Sidor",
"Szymon",
""
],
[
"Liu",
"Xiaodong",
""
],
[
"Farhi",
"David",
""
],
[
"Ryder",
"Nick",
""
],
[
"Pachocki",
"Jakub",
""
],
[
"Chen",
"Weizhu",
""
],
[
"Gao",
"Jianfeng",
""
]
] |
Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization (muP), many optimal HPs remain stable even as model size changes. This leads to a new HP tuning paradigm we call muTransfer: parametrize the target model in muP, tune the HP indirectly on a smaller model, and zero-shot transfer them to the full-sized model, i.e., without directly tuning the latter at all. We verify muTransfer on Transformer and ResNet. For example, 1) by transferring pretraining HPs from a model of 13M parameters, we outperform published numbers of BERT-large (350M parameters), with a total tuning cost equivalent to pretraining BERT-large once; 2) by transferring from 40M parameters, we outperform published numbers of the 6.7B GPT-3 model, with tuning cost only 7% of total pretraining cost. A Pytorch implementation of our technique can be found at github.com/microsoft/mup and installable via `pip install mup`.
|
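Since the record above points at the `mup` package, here is a hedged sketch of the usage pattern I believe its README documents; the exact names (`MuReadout`, `set_base_shapes`, `MuAdam`) are assumptions to be verified against github.com/microsoft/mup before use.

# Hedged sketch of muTransfer usage; API names below are assumptions taken
# from the mup repository's documented pattern, not verified here.
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam  # assumed exports

class MLP(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(32, width), nn.ReLU())
        self.readout = MuReadout(width, 10)   # muP-aware output layer (assumed)
    def forward(self, x):
        return self.readout(self.body(x))

target = MLP(width=4096)                      # full-sized model
base = MLP(width=64)                          # small proxy defining base shapes
set_base_shapes(target, base)                 # registers width multipliers (assumed)

# The learning rate tuned on the small proxy is reused zero-shot on the target:
opt = MuAdam(target.parameters(), lr=1e-3)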
2304.00946
|
Xiang Wang
|
Xiang Wang, Shiwei Zhang, Zhiwu Qing, Changxin Gao, Yingya Zhang, Deli
Zhao, Nong Sang
|
MoLo: Motion-augmented Long-short Contrastive Learning for Few-shot
Action Recognition
|
Accepted by CVPR-2023. Code:
https://github.com/alibaba-mmai-research/MoLo
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current state-of-the-art approaches for few-shot action recognition achieve
promising performance by conducting frame-level matching on learned visual
features. However, they generally suffer from two limitations: i) the matching
procedure between local frames tends to be inaccurate due to the lack of
guidance to force long-range temporal perception; ii) explicit motion learning
is usually ignored, leading to partial information loss. To address these
issues, we develop a Motion-augmented Long-short Contrastive Learning (MoLo)
method that contains two crucial components, including a long-short contrastive
objective and a motion autodecoder. Specifically, the long-short contrastive
objective is to endow local frame features with long-form temporal awareness by
maximizing their agreement with the global token of videos belonging to the
same class. The motion autodecoder is a lightweight architecture to reconstruct
pixel motions from the differential features, which explicitly embeds the
network with motion dynamics. By this means, MoLo can simultaneously learn
long-range temporal context and motion cues for comprehensive few-shot
matching. To demonstrate the effectiveness, we evaluate MoLo on five standard
benchmarks, and the results show that MoLo favorably outperforms recent
advanced methods. The source code is available at
https://github.com/alibaba-mmai-research/MoLo.
|
[
{
"created": "Mon, 3 Apr 2023 13:09:39 GMT",
"version": "v1"
}
] |
2023-04-04
|
[
[
"Wang",
"Xiang",
""
],
[
"Zhang",
"Shiwei",
""
],
[
"Qing",
"Zhiwu",
""
],
[
"Gao",
"Changxin",
""
],
[
"Zhang",
"Yingya",
""
],
[
"Zhao",
"Deli",
""
],
[
"Sang",
"Nong",
""
]
] |
Current state-of-the-art approaches for few-shot action recognition achieve promising performance by conducting frame-level matching on learned visual features. However, they generally suffer from two limitations: i) the matching procedure between local frames tends to be inaccurate due to the lack of guidance to force long-range temporal perception; ii) explicit motion learning is usually ignored, leading to partial information loss. To address these issues, we develop a Motion-augmented Long-short Contrastive Learning (MoLo) method that contains two crucial components, including a long-short contrastive objective and a motion autodecoder. Specifically, the long-short contrastive objective is to endow local frame features with long-form temporal awareness by maximizing their agreement with the global token of videos belonging to the same class. The motion autodecoder is a lightweight architecture to reconstruct pixel motions from the differential features, which explicitly embeds the network with motion dynamics. By this means, MoLo can simultaneously learn long-range temporal context and motion cues for comprehensive few-shot matching. To demonstrate the effectiveness, we evaluate MoLo on five standard benchmarks, and the results show that MoLo favorably outperforms recent advanced methods. The source code is available at https://github.com/alibaba-mmai-research/MoLo.
|
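A hedged sketch of the long-short contrastive objective described in the record above (shapes, temperature, and features are illustrative assumptions, not the authors' code): pull each video's averaged local-frame features toward the global tokens of same-class videos with an InfoNCE-style loss.

# Hedged sketch: InfoNCE-style long-short contrastive loss (illustrative).
import numpy as np

rng = np.random.default_rng(7)
n_videos, n_frames, dim, tau = 4, 8, 32, 0.1
local = rng.normal(size=(n_videos, n_frames, dim))      # per-frame features
global_tok = rng.normal(size=(n_videos, dim))           # one global token/video
labels = np.array([0, 0, 1, 1])                          # class of each video

def l2(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

local, global_tok = l2(local), l2(global_tok)
frame_mean = local.mean(axis=1)                          # long-form frame summary

losses = []
for i in range(n_videos):
    sims = frame_mean[i] @ global_tok.T / tau            # similarity to all tokens
    pos = labels == labels[i]                            # same-class globals
    log_den = np.log(np.exp(sims).sum())
    losses.append(-(sims[pos] - log_den).mean())
print(f"long-short contrastive loss: {np.mean(losses):.3f}")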
2205.11638
|
Ahmed Abbas
|
Ahmed Abbas, Paul Swoboda
|
DOGE-Train: Discrete Optimization on GPU with End-to-end Training
|
AAAI 2024. Alert before printing: pg. 16-20 only contain per instance
results, can possibly be skipped
| null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a fast, scalable, data-driven approach for solving relaxations of
0-1 integer linear programs. We use a combination of graph neural networks
(GNN) and the Lagrange decomposition based algorithm FastDOG (Abbas and Swoboda
2022b). We make the latter differentiable for end-to-end training and use GNNs
to predict its algorithmic parameters. This makes it possible to retain the
algorithm's theoretical properties, including dual feasibility and guaranteed
non-decrease in the lower bound, while improving it via training. We overcome
suboptimal
fixed points of the basic solver by additional non-parametric GNN update steps
maintaining dual feasibility. For training we use an unsupervised loss. We
train on smaller problems and test on larger ones showing strong generalization
performance with a GNN comprising only around $10k$ parameters. Our solver
achieves significantly faster performance and better dual objectives than its
non-learned version, achieving close to optimal objective values of LP
relaxations of very large structured prediction problems and on selected
combinatorial ones. In particular, we achieve better objective values than
specialized approximate solvers for specific problem classes while retaining
their efficiency. Our solver has better any-time performance over a large time
period compared to a commercial solver. Code available at
https://github.com/LPMP/BDD
|
[
{
"created": "Mon, 23 May 2022 21:09:41 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Dec 2023 20:55:19 GMT",
"version": "v2"
}
] |
2024-01-01
|
[
[
"Abbas",
"Ahmed",
""
],
[
"Swoboda",
"Paul",
""
]
] |
We present a fast, scalable, data-driven approach for solving relaxations of 0-1 integer linear programs. We use a combination of graph neural networks (GNN) and the Lagrange decomposition based algorithm FastDOG (Abbas and Swoboda 2022b). We make the latter differentiable for end-to-end training and use GNNs to predict its algorithmic parameters. This makes it possible to retain the algorithm's theoretical properties, including dual feasibility and guaranteed non-decrease in the lower bound, while improving it via training. We overcome suboptimal fixed points of the basic solver by additional non-parametric GNN update steps maintaining dual feasibility. For training we use an unsupervised loss. We train on smaller problems and test on larger ones showing strong generalization performance with a GNN comprising only around $10k$ parameters. Our solver achieves significantly faster performance and better dual objectives than its non-learned version, achieving close to optimal objective values of LP relaxations of very large structured prediction problems and on selected combinatorial ones. In particular, we achieve better objective values than specialized approximate solvers for specific problem classes while retaining their efficiency. Our solver has better any-time performance over a large time period compared to a commercial solver. Code available at https://github.com/LPMP/BDD
|
cs/0010027
|
David Martinez
|
David Martinez and Eneko Agirre
|
One Sense per Collocation and Genre/Topic Variations
|
9 pages
|
Proceedings of the Joint SIGDAT Conference on Empirical Methods in
Natural Language Processing and Very Large Corpora 2000
| null | null |
cs.CL
| null |
This paper revisits the one sense per collocation hypothesis using
fine-grained sense distinctions and two different corpora. We show that the
hypothesis is weaker for fine-grained sense distinctions (70% vs. 99% reported
earlier on 2-way ambiguities). We also show that one sense per collocation does
hold across corpora, but that collocations vary from one corpus to the other,
following genre and topic variations. This explains the low results when
performing word sense disambiguation across corpora. In fact, we demonstrate
that when two independent corpora share a related genre/topic, the word sense
disambiguation results would be better. Future work on word sense
disambiguation will have to take into account genre and topic as important
parameters on their models.
|
[
{
"created": "Tue, 17 Oct 2000 10:26:33 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Martinez",
"David",
""
],
[
"Agirre",
"Eneko",
""
]
] |
This paper revisits the one sense per collocation hypothesis using fine-grained sense distinctions and two different corpora. We show that the hypothesis is weaker for fine-grained sense distinctions (70% vs. 99% reported earlier on 2-way ambiguities). We also show that one sense per collocation does hold across corpora, but that collocations vary from one corpus to the other, following genre and topic variations. This explains the low results when performing word sense disambiguation across corpora. In fact, we demonstrate that when two independent corpora share a related genre/topic, the word sense disambiguation results would be better. Future work on word sense disambiguation will have to take into account genre and topic as important parameters on their models.
|
2004.14969
|
Baoxu Shi
|
Baoxu Shi, Shan Li, Jaewon Yang, Mustafa Emre Kazdagli, Qi He
|
Learning to Ask Screening Questions for Job Postings
|
10 pages, to appear in SIGIR2020
| null | null | null |
cs.IR cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
At LinkedIn, we want to create economic opportunity for everyone in the
global workforce. A critical aspect of this goal is matching jobs with
qualified applicants. To improve hiring efficiency and reduce the need to
manually screen each applicant, we develop a new product where recruiters
can ask screening questions online so that they can filter qualified candidates
easily. To add screening questions to all $20$M active jobs at LinkedIn, we
propose a new task that aims to automatically generate screening questions for
a given job posting. To solve the task of generating screening questions, we
develop a two-stage deep learning model called Job2Questions, where we apply a
deep learning model to detect intent from the text description, and then rank
the detected intents by their importance based on other contextual features.
Since this is a new product with no historical data, we employ deep transfer
learning to train complex models with limited training data. We launched the
screening question product and our AI models to LinkedIn users and observed
significant impact in the job marketplace. During our online A/B test, we
observed $+53.10\%$ screening question suggestion acceptance rate, $+22.17\%$
job coverage, $+190\%$ recruiter-applicant interaction, and $+11$ Net Promoter
Score. In sum, the deployed Job2Questions model helps recruiters to find
qualified applicants and job seekers to find jobs they are qualified for.
|
[
{
"created": "Thu, 30 Apr 2020 17:18:17 GMT",
"version": "v1"
}
] |
2020-05-01
|
[
[
"Shi",
"Baoxu",
""
],
[
"Li",
"Shan",
""
],
[
"Yang",
"Jaewon",
""
],
[
"Kazdagli",
"Mustafa Emre",
""
],
[
"He",
"Qi",
""
]
] |
At LinkedIn, we want to create economic opportunity for everyone in the global workforce. A critical aspect of this goal is matching jobs with qualified applicants. To improve hiring efficiency and reduce the need to manually screen each applicant, we develop a new product where recruiters can ask screening questions online so that they can filter qualified candidates easily. To add screening questions to all $20$M active jobs at LinkedIn, we propose a new task that aims to automatically generate screening questions for a given job posting. To solve the task of generating screening questions, we develop a two-stage deep learning model called Job2Questions, where we apply a deep learning model to detect intent from the text description, and then rank the detected intents by their importance based on other contextual features. Since this is a new product with no historical data, we employ deep transfer learning to train complex models with limited training data. We launched the screening question product and our AI models to LinkedIn users and observed significant impact in the job marketplace. During our online A/B test, we observed $+53.10\%$ screening question suggestion acceptance rate, $+22.17\%$ job coverage, $+190\%$ recruiter-applicant interaction, and $+11$ Net Promoter Score. In sum, the deployed Job2Questions model helps recruiters to find qualified applicants and job seekers to find jobs they are qualified for.
|
0711.3325
|
EDA Publishing Association
|
Hsiharng Yang, Chung-Tze Lee
|
Miniaturized Fluorescence Excitation Platform with Optical Fiber for
Bio-Detection Chips
|
Submitted on behalf of TIMA Editions
(http://irevues.inist.fr/tima-editions)
|
In Symposium on Design, Test, Integration and Packaging of
MEMS/MOEMS - DTIP 2006, Stresa, Lago Maggiore, Italy (2006)
| null | null |
cs.OH
| null |
This paper presents a new study on the fabrication of a fluorescence
bio-detection chip platform with optical fiber transmission. Anisotropic wet
etching of (100) silicon wafers to fabricate V-grooves for optical fiber
alignment and micro-mirrors is included. Combined with an anodic bonding
technique to join the glass, the silicon structure, and the optical fiber, a
fluorescence excitation platform was completed. In this study, a 40% KOH
etching solution was used to study the effect of the process parameters. The
results show that working temperature is the main parameter that significantly
affects the etch rate. The anisotropic etching produced 54.7-degree reflective
mirrors, whose reflectivity for the optical beam was also examined. The
surface roughness of the micro-mirror, measured using AFM, is Ra 4.1 nm, which
provides excellent optical reflection. The incident light and beam profiles
were also examined for further study. This study shows that the micro-platform
is adaptable for fluorescence bio-detection.
|
[
{
"created": "Wed, 21 Nov 2007 10:09:35 GMT",
"version": "v1"
}
] |
2007-11-29
|
[
[
"Yang",
"Hsiharng",
""
],
[
"Lee",
"Chung-Tze",
""
]
] |
This paper presents a new study on the fabrication of a fluorescence bio-detection chip platform with optical fiber transmission. Anisotropic wet etching of (100) silicon wafers to fabricate V-grooves for optical fiber alignment and micro-mirrors is included. Combined with an anodic bonding technique to join the glass, the silicon structure, and the optical fiber, a fluorescence excitation platform was completed. In this study, a 40% KOH etching solution was used to study the effect of the process parameters. The results show that working temperature is the main parameter that significantly affects the etch rate. The anisotropic etching produced 54.7-degree reflective mirrors, whose reflectivity for the optical beam was also examined. The surface roughness of the micro-mirror, measured using AFM, is Ra 4.1 nm, which provides excellent optical reflection. The incident light and beam profiles were also examined for further study. This study shows that the micro-platform is adaptable for fluorescence bio-detection.
|
1804.00175
|
Yi Li
|
Yi Li, Gu Wang, Xiangyang Ji, Yu Xiang, Dieter Fox
|
DeepIM: Deep Iterative Matching for 6D Pose Estimation
|
submitted to IJCV, update results on YCB_Video, add depth-based
results
| null |
10.1007/s11263-019-01250-9
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating the 6D pose of objects from images is an important problem in
various applications such as robot manipulation and virtual reality. While
direct regression of images to object poses has limited accuracy, matching
rendered images of an object against the observed image can produce accurate
results. In this work, we propose a novel deep neural network for 6D pose
matching named DeepIM. Given an initial pose estimation, our network is able to
iteratively refine the pose by matching the rendered image against the observed
image. The network is trained to predict a relative pose transformation using
an untangled representation of 3D location and 3D orientation and an iterative
training process. Experiments on two commonly used benchmarks for 6D pose
estimation demonstrate that DeepIM achieves large improvements over
state-of-the-art methods. We furthermore show that DeepIM is able to match
previously unseen objects.
|
[
{
"created": "Sat, 31 Mar 2018 14:02:25 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Apr 2018 16:28:50 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Mar 2019 13:25:49 GMT",
"version": "v3"
},
{
"created": "Wed, 2 Oct 2019 00:54:47 GMT",
"version": "v4"
}
] |
2019-10-03
|
[
[
"Li",
"Yi",
""
],
[
"Wang",
"Gu",
""
],
[
"Ji",
"Xiangyang",
""
],
[
"Xiang",
"Yu",
""
],
[
"Fox",
"Dieter",
""
]
] |
Estimating the 6D pose of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the observed image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using an untangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
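The iterative matching idea can be sketched as a simple control loop; render() and predict_delta_pose() below are placeholders for the paper's renderer and trained network, so this illustrates only the flow of iterative refinement, not DeepIM itself.

import numpy as np

def render(pose):
    # Placeholder renderer: a real system rasterizes the object at `pose`.
    return np.zeros((64, 64))

def predict_delta_pose(rendered, observed):
    # Placeholder for the trained matching network; it would output a small
    # relative transformation. The identity correction is returned here.
    return np.eye(3), np.zeros(3)

def refine(pose, observed, iters=4):
    for _ in range(iters):                     # iterative render-and-match loop
        d_rot, d_trans = predict_delta_pose(render(pose), observed)
        rot, trans = pose
        pose = (d_rot @ rot, trans + d_trans)  # separate rotation/translation update
    return pose

print(refine((np.eye(3), np.zeros(3)), observed=np.zeros((64, 64))))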
|
2006.02610
|
Xulei Yang
|
Balagopal Unnikrishnan, Pranshu Ranjan Singh, Xulei Yang, and Matthew
Chin Heng Chua
|
Semi-supervised and Unsupervised Methods for Heart Sounds Classification
in Restricted Data Environments
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated heart sounds classification is a much-needed diagnostic tool in view
of the increasing incidence of heart-related diseases worldwide. In this work,
we conduct a comprehensive study of heart sounds classification by using
various supervised, semi-supervised and unsupervised approaches on the
PhysioNet/CinC 2016 Challenge dataset. Supervised approaches, including deep
learning and machine learning methods, require large amounts of labelled data
to train the models, which are challenging to obtain in most practical
scenarios. In view of the need to reduce the labelling burden for clinical
practices, where human labelling is both expensive and time-consuming,
semi-supervised or even unsupervised approaches in restricted data settings are
desirable. A GAN-based semi-supervised method is therefore proposed, which
allows the usage of unlabelled data samples to boost the learning of data
distribution. It achieves a better performance in terms of AUROC over the
supervised baseline when limited data samples exist. Furthermore, several
unsupervised methods are explored as an alternative approach by considering the
given problem as an anomaly detection scenario. In particular, the unsupervised
feature extraction using 1D CNN Autoencoder coupled with one-class SVM obtains
good performance without any data labelling. The potential of the proposed
semi-supervised and unsupervised methods may lead to a workflow tool in the
future for the creation of higher quality datasets.
|
[
{
"created": "Thu, 4 Jun 2020 02:07:35 GMT",
"version": "v1"
}
] |
2020-06-05
|
[
[
"Unnikrishnan",
"Balagopal",
""
],
[
"Singh",
"Pranshu Ranjan",
""
],
[
"Yang",
"Xulei",
""
],
[
"Chua",
"Matthew Chin Heng",
""
]
] |
Automated heart sounds classification is a much-needed diagnostic tool in view of the increasing incidence of heart-related diseases worldwide. In this work, we conduct a comprehensive study of heart sounds classification by using various supervised, semi-supervised and unsupervised approaches on the PhysioNet/CinC 2016 Challenge dataset. Supervised approaches, including deep learning and machine learning methods, require large amounts of labelled data to train the models, which are challenging to obtain in most practical scenarios. In view of the need to reduce the labelling burden for clinical practices, where human labelling is both expensive and time-consuming, semi-supervised or even unsupervised approaches in restricted data settings are desirable. A GAN-based semi-supervised method is therefore proposed, which allows the usage of unlabelled data samples to boost the learning of data distribution. It achieves a better performance in terms of AUROC over the supervised baseline when limited data samples exist. Furthermore, several unsupervised methods are explored as an alternative approach by considering the given problem as an anomaly detection scenario. In particular, the unsupervised feature extraction using 1D CNN Autoencoder coupled with one-class SVM obtains good performance without any data labelling. The potential of the proposed semi-supervised and unsupervised methods may lead to a workflow tool in the future for the creation of higher quality datasets.
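As a rough illustration of the unsupervised route (autoencoder features plus one-class SVM), the following sketch fits a one-class SVM on stand-in feature vectors; random Gaussians replace the paper's 1D CNN Autoencoder embeddings.

# Sketch of anomaly-detection-style classification: features from normal
# recordings fit a one-class SVM that flags abnormal sounds. Random vectors
# stand in for the paper's learned autoencoder features.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(200, 16))        # stand-in embeddings
test_feats = np.vstack([rng.normal(0.0, 1.0, size=(5, 16)),  # 5 "normal"
                        rng.normal(4.0, 1.0, size=(5, 16))]) # 5 shifted (abnormal)

clf = OneClassSVM(kernel="rbf", nu=0.05).fit(normal_feats)
print(clf.predict(test_feats))   # +1 = normal, -1 = flagged as anomalous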
|
1109.0660
|
Albert Fannjiang
|
Albert Fannjiang and Wenjing Liao
|
Mismatch and resolution in compressive imaging
|
Figure 5 revised
| null |
10.1117/12.892434
| null |
cs.IT math.IT math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Highly coherent sensing matrices arise in discretization of continuum
problems such as radar and medical imaging when the grid spacing is below the
Rayleigh threshold as well as in using highly coherent, redundant dictionaries
as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band
exclusion and local optimization are proposed to enhance Orthogonal Matching
Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP
have provably performance guarantee of reconstructing sparse, widely separated
objects {\em independent} of the redundancy and have a sparsity constraint and
computational cost similar to OMP's. Numerical study demonstrates the
effectiveness of BLOOMP for compressed sensing with highly coherent, redundant
sensing matrices.
|
[
{
"created": "Sat, 3 Sep 2011 23:58:02 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Sep 2011 22:23:52 GMT",
"version": "v2"
}
] |
2015-05-30
|
[
[
"Fannjiang",
"Albert",
""
],
[
"Liao",
"Wenjing",
""
]
] |
Highly coherent sensing matrices arise in discretization of continuum problems such as radar and medical imaging when the grid spacing is below the Rayleigh threshold as well as in using highly coherent, redundant dictionaries as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have provable performance guarantees for reconstructing sparse, widely separated objects {\em independent} of the redundancy and have a sparsity constraint and computational cost similar to OMP's. A numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices.
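The band-exclusion idea behind BOMP can be sketched in a few lines of numpy: at each greedy step, columns within the coherence band of already-selected columns are excluded from the search. The local-optimization step of BLOOMP is omitted, and the threshold and problem sizes are illustrative.

# Minimal band-excluded Orthogonal Matching Pursuit sketch (the core idea of
# BOMP). Columns j with coherence |<a_i, a_j>| > eta to a chosen column i are
# banned from later selections.
import numpy as np

def band_excluded_omp(A, y, sparsity, eta=0.5):
    n = A.shape[1]
    G = np.abs(A.conj().T @ A)          # column coherence pattern
    support, residual = [], y.copy()
    for _ in range(sparsity):
        corr = np.abs(A.conj().T @ residual)
        banned = np.zeros(n, dtype=bool)
        for i in support:               # exclude the band of selected columns
            banned |= G[i] > eta
        corr[banned] = -np.inf
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n, dtype=A.dtype)
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(32, 128)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(128); x_true[[5, 70]] = 1.0
print(np.nonzero(band_excluded_omp(A, A @ x_true, 2))[0])  # expect [5, 70]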
|
1905.10833
|
Michal Dory
|
Michal Dory, Mohsen Ghaffari
|
Improved Distributed Approximations for Minimum-Weight
Two-Edge-Connected Spanning Subgraph
| null | null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The minimum-weight $2$-edge-connected spanning subgraph (2-ECSS) problem is a
natural generalization of the well-studied minimum-weight spanning tree (MST)
problem, and it has received considerable attention in the area of network
design. The latter problem asks for a minimum-weight subgraph with an edge
connectivity of $1$ between each pair of vertices while the former strengthens
this edge-connectivity requirement to $2$. Despite this resemblance, the 2-ECSS
problem is considerably more complex than MST. While MST admits a linear-time
centralized exact algorithm, 2-ECSS is NP-hard and the best known centralized
approximation algorithm for it (that runs in polynomial time) gives a
$2$-approximation.
In this paper, we give a deterministic distributed algorithm with round
complexity of $\widetilde{O}(D+\sqrt{n})$ that computes a
$(5+\epsilon)$-approximation of 2-ECSS, for any constant $\epsilon>0$. Up to
logarithmic factors, this complexity matches the
$\widetilde{\Omega}(D+\sqrt{n})$ lower bound that can be derived from Das Sarma
et al. [STOC'11], as shown by Censor-Hillel and Dory [OPODIS'17]. Our result is
the first distributed constant approximation for 2-ECSS in the nearly optimal
time and it improves on a recent randomized algorithm of Dory [PODC'18], which
achieved an $O(\log n)$-approximation in $\widetilde{O}(D+\sqrt{n})$ rounds.
We also present an alternative algorithm for $O(\log n)$-approximation, whose
round complexity is linear in the low-congestion shortcut parameter of the
network, following a framework introduced by Ghaffari and Haeupler [SODA'16].
This algorithm has round complexity $\widetilde{O}(D+\sqrt{n})$ in worst-case
networks but it provably runs much faster in many well-behaved graph families
of interest. For instance, it runs in $\widetilde{O}(D)$ time in planar
networks and those with bounded genus, bounded path-width or bounded
tree-width.
|
[
{
"created": "Sun, 26 May 2019 16:37:22 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2019 11:49:00 GMT",
"version": "v2"
}
] |
2019-06-04
|
[
[
"Dory",
"Michal",
""
],
[
"Ghaffari",
"Mohsen",
""
]
] |
The minimum-weight $2$-edge-connected spanning subgraph (2-ECSS) problem is a natural generalization of the well-studied minimum-weight spanning tree (MST) problem, and it has received considerable attention in the area of network design. The latter problem asks for a minimum-weight subgraph with an edge connectivity of $1$ between each pair of vertices while the former strengthens this edge-connectivity requirement to $2$. Despite this resemblance, the 2-ECSS problem is considerably more complex than MST. While MST admits a linear-time centralized exact algorithm, 2-ECSS is NP-hard and the best known centralized approximation algorithm for it (that runs in polynomial time) gives a $2$-approximation. In this paper, we give a deterministic distributed algorithm with round complexity of $\widetilde{O}(D+\sqrt{n})$ that computes a $(5+\epsilon)$-approximation of 2-ECSS, for any constant $\epsilon>0$. Up to logarithmic factors, this complexity matches the $\widetilde{\Omega}(D+\sqrt{n})$ lower bound that can be derived from Das Sarma et al. [STOC'11], as shown by Censor-Hillel and Dory [OPODIS'17]. Our result is the first distributed constant approximation for 2-ECSS in the nearly optimal time and it improves on a recent randomized algorithm of Dory [PODC'18], which achieved an $O(\log n)$-approximation in $\widetilde{O}(D+\sqrt{n})$ rounds. We also present an alternative algorithm for $O(\log n)$-approximation, whose round complexity is linear in the low-congestion shortcut parameter of the network, following a framework introduced by Ghaffari and Haeupler [SODA'16]. This algorithm has round complexity $\widetilde{O}(D+\sqrt{n})$ in worst-case networks but it provably runs much faster in many well-behaved graph families of interest. For instance, it runs in $\widetilde{O}(D)$ time in planar networks and those with bounded genus, bounded path-width or bounded tree-width.
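To make the feasibility side of the problem concrete, a centralized sanity check (not the paper's distributed algorithm) can verify that a candidate subgraph is spanning and 2-edge-connected and report its weight; the example graph below is arbitrary.

# Centralized 2-ECSS feasibility check with networkx, for illustration only.
import networkx as nx

G = nx.cycle_graph(5)                         # a 5-cycle is 2-edge-connected
nx.set_edge_attributes(G, 1, "weight")

def is_feasible_2ecss(G, H_edges):
    H = G.edge_subgraph(H_edges)
    return (set(H.nodes) == set(G.nodes)       # spanning
            and nx.is_k_edge_connected(H, 2))  # edge connectivity at least 2

H_edges = list(G.edges)                        # candidate: the whole cycle
print(is_feasible_2ecss(G, H_edges),
      sum(G.edges[e]["weight"] for e in H_edges))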
|
2308.06201
|
Mohammad Eslami
|
Mohammad Eslami, Tiago Perez and Samuel Pagliarini
|
SALSy: Security-Aware Layout Synthesis
| null | null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integrated Circuits (ICs) are the target of diverse attacks during their
lifetime. Fabrication-time attacks, such as the insertion of Hardware Trojans,
can give an adversary access to privileged data and/or the means to corrupt the
IC's internal computation. Post-fabrication attacks, where the end-user takes a
malicious role, also attempt to obtain privileged information through means
such as fault injection and probing. Taking these threats into account, this
paper proposes a methodology for Security-Aware Layout Synthesis (SALSy), such
that ICs can be designed with security in mind in the
same manner as power-performance-area (PPA) metrics are considered today, a
concept known as security closure. Furthermore, the trade-offs between PPA and
security are considered and a chip is fabricated in a 65nm CMOS commercial
technology for validation purposes - a feature not seen in previous research on
security closure. Measurements on the fabricated ICs indicate that SALSy
promotes a modest increase in power in order to achieve significantly improved
security metrics.
|
[
{
"created": "Fri, 11 Aug 2023 15:52:28 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Aug 2023 14:15:02 GMT",
"version": "v2"
}
] |
2023-08-22
|
[
[
"Eslami",
"Mohammad",
""
],
[
"Perez",
"Tiago",
""
],
[
"Pagliarini",
"Samuel",
""
]
] |
Integrated Circuits (ICs) are the target of diverse attacks during their lifetime. Fabrication-time attacks, such as the insertion of Hardware Trojans, can give an adversary access to privileged data and/or the means to corrupt the IC's internal computation. Post-fabrication attacks, where the end-user takes a malicious role, also attempt to obtain privileged information through means such as fault injection and probing. Taking these threats into account, this paper proposes a methodology for Security-Aware Layout Synthesis (SALSy), such that ICs can be designed with security in mind in the same manner as power-performance-area (PPA) metrics are considered today, a concept known as security closure. Furthermore, the trade-offs between PPA and security are considered and a chip is fabricated in a 65nm CMOS commercial technology for validation purposes - a feature not seen in previous research on security closure. Measurements on the fabricated ICs indicate that SALSy promotes a modest increase in power in order to achieve significantly improved security metrics.
|
2309.06223
|
Yanzuo Chen
|
Yanzuo Chen (1), Zhibo Liu (1), Yuanyuan Yuan (1), Sihang Hu (2),
Tianxiang Li (2), Shuai Wang (1) ((1) The Hong Kong University of Science and
Technology, (2) Huawei Technologies)
|
Unveiling Single-Bit-Flip Attacks on DNN Executables
|
Fix typo
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent research has shown that bit-flip attacks (BFAs) can manipulate deep
neural networks (DNNs) via DRAM Rowhammer exploitations. Existing attacks are
primarily launched over high-level DNN frameworks like PyTorch and flip bits in
model weight files. Nevertheless, DNNs are frequently compiled into low-level
executables by deep learning (DL) compilers to fully leverage low-level
hardware primitives. The compiled code is usually high-speed and manifests
dramatically distinct execution paradigms from high-level DNN frameworks.
In this paper, we launch the first systematic study on the attack surface of
BFA specifically for DNN executables compiled by DL compilers. We design an
automated search tool to identify vulnerable bits in DNN executables and
identify practical attack vectors that exploit the model structure in DNN
executables with BFAs (whereas prior works make likely strong assumptions to
attack model weights). DNN executables appear more "opaque" than models in
high-level DNN frameworks. Nevertheless, we find that DNN executables contain
extensive, severe (e.g., single-bit flip), and transferrable attack surfaces
that are not present in high-level DNN models and can be exploited to deplete
full model intelligence and control output labels. Our finding calls for
incorporating security mechanisms in future DNN compilation toolchains.
|
[
{
"created": "Tue, 12 Sep 2023 13:42:20 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Oct 2023 04:33:54 GMT",
"version": "v2"
}
] |
2023-10-10
|
[
[
"Chen",
"Yanzuo",
""
],
[
"Liu",
"Zhibo",
""
],
[
"Yuan",
"Yuanyuan",
""
],
[
"Hu",
"Sihang",
""
],
[
"Li",
"Tianxiang",
""
],
[
"Wang",
"Shuai",
""
]
] |
Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitations. Existing attacks are primarily launched over high-level DNN frameworks like PyTorch and flip bits in model weight files. Nevertheless, DNNs are frequently compiled into low-level executables by deep learning (DL) compilers to fully leverage low-level hardware primitives. The compiled code is usually high-speed and manifests dramatically distinct execution paradigms from high-level DNN frameworks. In this paper, we launch the first systematic study on the attack surface of BFA specifically for DNN executables compiled by DL compilers. We design an automated search tool to identify vulnerable bits in DNN executables and identify practical attack vectors that exploit the model structure in DNN executables with BFAs (whereas prior works make arguably strong assumptions to attack model weights). DNN executables appear more "opaque" than models in high-level DNN frameworks. Nevertheless, we find that DNN executables contain extensive, severe (e.g., single-bit flip), and transferable attack surfaces that are not present in high-level DNN models and can be exploited to deplete full model intelligence and control output labels. Our finding calls for incorporating security mechanisms in future DNN compilation toolchains.
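The severity of a single flipped bit is easy to demonstrate at the value level: flipping one exponent bit of a float32 weight changes it by dozens of orders of magnitude. The snippet below shows only this bit-level effect, not the Rowhammer delivery or the executable-level search from the paper.

# Flip one exponent bit of a float32 weight and observe the blow-up.
import numpy as np

w = np.array([0.5], dtype=np.float32)
bits = w.view(np.uint32)           # reinterpret the float's raw bits
bits ^= np.uint32(1) << 30         # flip exponent bit 30 in place
print(w[0])                        # 0.5 becomes ~1.7e+38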
|
2404.14610
|
Christian Vogler
|
Paige DeVries, Nina Tran, Keith Delk, Melanie Miga, Richard Taulbee,
Pranav Pidathala, Abraham Glasser, Raja Kushlanagar and Christian Vogler
|
Sign Language-Based versus Touch-Based Input for Deaf Users with
Interactive Personal Assistants in Simulated Kitchen Environments
|
To appear in Extended Abstracts of the CHI Conference on Human
Factors in Computing Systems, CHI EA 2024, May 11-16, 2024, Honolulu, HI,
USA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3613905.3651075
| null |
10.1145/3613905.3651075
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we assess the usability of interactive personal assistants
(IPAs), such as Amazon Alexa, in a simulated kitchen smart home environment,
with deaf and hard of hearing users. Participants engage in activities in a way
that causes their hands to get dirty. With these dirty hands, they are tasked
with two different input methods for IPAs: American Sign Language (ASL) in a
Wizard-of-Oz design, and smart home apps with a touchscreen. Usability ratings
show that participants significantly preferred ASL over touch-based apps with
dirty hands, although not to a larger extent than in comparable previous work
with clean hands. Participants also expressed significant enthusiasm for
ASL-based IPA interaction in Net Promoter scores and in questions about their
overall preferences. Preliminary observations further suggest that having dirty
hands may affect the way people sign, which may pose challenges for building
IPAs that natively support sign language input.
|
[
{
"created": "Mon, 22 Apr 2024 22:17:35 GMT",
"version": "v1"
}
] |
2024-05-16
|
[
[
"DeVries",
"Paige",
""
],
[
"Tran",
"Nina",
""
],
[
"Delk",
"Keith",
""
],
[
"Miga",
"Melanie",
""
],
[
"Taulbee",
"Richard",
""
],
[
"Pidathala",
"Pranav",
""
],
[
"Glasser",
"Abraham",
""
],
[
"Kushlanagar",
"Raja",
""
],
[
"Vogler",
"Christian",
""
]
] |
In this study, we assess the usability of interactive personal assistants (IPAs), such as Amazon Alexa, in a simulated kitchen smart home environment, with deaf and hard of hearing users. Participants engage in activities in a way that causes their hands to get dirty. With these dirty hands, they are tasked with two different input methods for IPAs: American Sign Language (ASL) in a Wizard-of-Oz design, and smart home apps with a touchscreen. Usability ratings show that participants significantly preferred ASL over touch-based apps with dirty hands, although not to a larger extent than in comparable previous work with clean hands. Participants also expressed significant enthusiasm for ASL-based IPA interaction in Net Promoter scores and in questions about their overall preferences. Preliminary observations further suggest that having dirty hands may affect the way people sign, which may pose challenges for building IPAs that natively support sign language input.
|
0811.0273
|
Vinay Joseph
|
Vinod Sharma, Utpal Mukherji and Vinay Joseph
|
Efficient Energy Management Policies for Networks with Energy Harvesting
Sensor Nodes
|
Keywords: Optimal energy management policies, energy harvesting,
sensor networks, MAC protocols
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study sensor networks with energy harvesting nodes. The generated energy
at a node can be stored in a buffer. A sensor node periodically senses a random
field and generates a packet. These packets are stored in a queue and
transmitted using the energy available at that time at the node. For such
networks we develop efficient energy management policies. First, for a single
node, we obtain policies that are throughput optimal, i.e., the data queue
stays stable for the largest possible data rate. Next we obtain energy
management policies which minimize the mean delay in the queue. We also compare
performance of several easily implementable suboptimal policies. A greedy
policy is identified which, in the low-SNR regime, is throughput optimal and also
minimizes mean delay. Next using the results for a single node, we develop
efficient MAC policies.
|
[
{
"created": "Mon, 3 Nov 2008 12:05:01 GMT",
"version": "v1"
}
] |
2008-11-04
|
[
[
"Sharma",
"Vinod",
""
],
[
"Mukherji",
"Utpal",
""
],
[
"Joseph",
"Vinay",
""
]
] |
We study sensor networks with energy harvesting nodes. The generated energy at a node can be stored in a buffer. A sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at that time at the node. For such networks we develop efficient energy management policies. First, for a single node, we obtain policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare performance of several easily implementable suboptimal policies. A greedy policy is identified which, in the low-SNR regime, is throughput optimal and also minimizes mean delay. Next using the results for a single node, we develop efficient MAC policies.
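A schematic discrete-time simulation of a greedy policy of the kind discussed above is sketched below; the harvest and arrival distributions and the rate function are illustrative assumptions, not the paper's exact model.

# Toy simulation: each slot the node spends all stored energy, transmitting at
# a rate r(e) = 0.5*log2(1 + e); data arrivals queue up until served.
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
energy, queue, served = 0.0, 0.0, 0.0
for _ in range(T):
    energy += rng.exponential(1.0)        # harvested energy this slot
    queue += rng.poisson(0.4)             # arriving data (bits)
    rate = 0.5 * np.log2(1.0 + energy)    # greedy: use all available energy
    sent = min(queue, rate)
    queue -= sent
    served += sent
    energy = 0.0
print(f"throughput: {served / T:.3f} bits/slot, final backlog: {queue:.1f}")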
|
1801.08873
|
Alexander Thomasian
|
Alexander Thomasian
|
Mirrored and Hybrid Disk Arrays: Organization, Scheduling, Reliability,
and Performance
| null | null | null | null |
cs.DC cs.OS cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
Basic mirroring (BM) classified as RAID level 1 replicates data on two disks,
thus doubling disk access bandwidth for read requests. RAID1/0 is an array of
BM pairs with balanced loads due to striping. When a disk fails, the read load
on its pair is doubled, which results in halving the maximum attainable
bandwidth. We review RAID1 organizations which attain a balanced load upon disk
failure, but as shown by reliability analysis tend to be less reliable than
RAID1/0. Hybrid disk arrays which store XORed instead of replicated data tend
to have a higher reliability than mirrored disks, but incur a higher overhead
in updating data. Read request response times can be improved by processing
reads at a higher priority than writes, since reads have a direct effect on
application response time. Shortest-seek-distance and affinity-based routing
both shorten seek time. Anticipatory arm placement places arms optimally to
minimize the seek distance. The analysis of RAID1 in normal, degraded, and
rebuild mode is provided to quantify RAID1/0 performance. We compare the
reliability of mirrored disk organizations against each other and hybrid disks
and erasure coded disk arrays.
|
[
{
"created": "Fri, 26 Jan 2018 15:59:51 GMT",
"version": "v1"
}
] |
2018-01-29
|
[
[
"Thomasian",
"Alexander",
""
]
] |
Basic mirroring (BM) classified as RAID level 1 replicates data on two disks, thus doubling disk access bandwidth for read requests. RAID1/0 is an array of BM pairs with balanced loads due to striping. When a disk fails, the read load on its pair is doubled, which results in halving the maximum attainable bandwidth. We review RAID1 organizations which attain a balanced load upon disk failure, but as shown by reliability analysis tend to be less reliable than RAID1/0. Hybrid disk arrays which store XORed instead of replicated data tend to have a higher reliability than mirrored disks, but incur a higher overhead in updating data. Read request response times can be improved by processing reads at a higher priority than writes, since reads have a direct effect on application response time. Shortest-seek-distance and affinity-based routing both shorten seek time. Anticipatory arm placement places arms optimally to minimize the seek distance. The analysis of RAID1 in normal, degraded, and rebuild mode is provided to quantify RAID1/0 performance. We compare the reliability of mirrored disk organizations against each other and hybrid disks and erasure coded disk arrays.
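For intuition on mirrored-pair reliability, the textbook approximation MTTDL ~ MTTF^2 / (2 * MTTR) can be computed directly; the numbers below are assumed, and the approximation ignores effects (e.g., latent sector errors) that a full analysis like this one would include.

# Back-of-the-envelope mean time to data loss for a RAID1 pair.
MTTF_HOURS = 1_000_000      # assumed disk mean time to failure
MTTR_HOURS = 24             # assumed rebuild time onto a replacement

mttdl = MTTF_HOURS ** 2 / (2 * MTTR_HOURS)
print(f"single disk: {MTTF_HOURS / 8760:,.0f} years to data loss (mean)")
print(f"RAID1 pair : {mttdl / 8760:,.0f} years to data loss (mean)")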
|
1304.5438
|
EPTCS
|
Thomas Brihaye (University of Mons), Quentin Menet (University of
Mons)
|
Simple strategies for Banach-Mazur games and fairly correct systems
|
In Proceedings GandALF 2013, arXiv:1307.4162
|
EPTCS 119, 2013, pp. 21-34
|
10.4204/EPTCS.119.5
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 2006, Varacca and V\"olzer proved that on finite graphs, omega-regular
large sets coincide with omega-regular sets of probability 1, by using the
existence of positional strategies in the related Banach-Mazur games. Motivated
by this result, we try to understand relations between sets of probability 1
and various notions of simple strategies (including those introduced in a
recent paper of Gr\"adel and Lessenich). Then, we introduce a generalisation of
the classical Banach-Mazur game and in particular, a probabilistic version
whose goal is to characterise sets of probability 1 (as classical Banach-Mazur
games characterise large sets). We obtain a determinacy result for these games,
when the winning set is a countable intersection of open sets.
|
[
{
"created": "Fri, 19 Apr 2013 14:53:04 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jul 2013 04:12:09 GMT",
"version": "v2"
}
] |
2013-07-18
|
[
[
"Brihaye",
"Thomas",
"",
"University of Mons"
],
[
"Menet",
"Quentin",
"",
"University of\n Mons"
]
] |
In 2006, Varacca and V\"olzer proved that on finite graphs, omega-regular large sets coincide with omega-regular sets of probability 1, by using the existence of positional strategies in the related Banach-Mazur games. Motivated by this result, we try to understand relations between sets of probability 1 and various notions of simple strategies (including those introduced in a recent paper of Gr\"adel and Lessenich). Then, we introduce a generalisation of the classical Banach-Mazur game and in particular, a probabilistic version whose goal is to characterise sets of probability 1 (as classical Banach-Mazur games characterise large sets). We obtain a determinacy result for these games, when the winning set is a countable intersection of open sets.
|
2207.05736
|
Kai-En Lin
|
Kai-En Lin, Lin Yen-Chen, Wei-Sheng Lai, Tsung-Yi Lin, Yi-Chang Shih,
Ravi Ramamoorthi
|
Vision Transformer for NeRF-Based View Synthesis from a Single Input
Image
|
WACV 2023 Project website:
https://cseweb.ucsd.edu/~viscomp/projects/VisionNeRF/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although neural radiance fields (NeRF) have shown impressive advances for
novel view synthesis, most methods typically require multiple input images of
the same scene with accurate camera poses. In this work, we seek to
substantially reduce the inputs to a single unposed image. Existing approaches
condition on local image features to reconstruct a 3D object, but often render
blurry predictions at viewpoints that are far away from the source view. To
address this issue, we propose to leverage both the global and local features
to form an expressive 3D representation. The global features are learned from a
vision transformer, while the local features are extracted from a 2D
convolutional network. To synthesize a novel view, we train a multilayer
perceptron (MLP) network conditioned on the learned 3D representation to
perform volume rendering. This novel 3D representation allows the network to
reconstruct unseen regions without enforcing constraints like symmetry or
canonical coordinate systems. Our method can render novel views from only a
single input image and generalize across multiple object categories using a
single model. Quantitative and qualitative evaluations demonstrate that the
proposed method achieves state-of-the-art performance and renders richer
details than existing approaches.
|
[
{
"created": "Tue, 12 Jul 2022 17:52:04 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Oct 2022 20:09:59 GMT",
"version": "v2"
}
] |
2022-10-17
|
[
[
"Lin",
"Kai-En",
""
],
[
"Yen-Chen",
"Lin",
""
],
[
"Lai",
"Wei-Sheng",
""
],
[
"Lin",
"Tsung-Yi",
""
],
[
"Shih",
"Yi-Chang",
""
],
[
"Ramamoorthi",
"Ravi",
""
]
] |
Although neural radiance fields (NeRF) have shown impressive advances for novel view synthesis, most methods typically require multiple input images of the same scene with accurate camera poses. In this work, we seek to substantially reduce the inputs to a single unposed image. Existing approaches condition on local image features to reconstruct a 3D object, but often render blurry predictions at viewpoints that are far away from the source view. To address this issue, we propose to leverage both the global and local features to form an expressive 3D representation. The global features are learned from a vision transformer, while the local features are extracted from a 2D convolutional network. To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering. This novel 3D representation allows the network to reconstruct unseen regions without enforcing constraints like symmetry or canonical coordinate systems. Our method can render novel views from only a single input image and generalize across multiple object categories using a single model. Quantitative and qualitative evaluations demonstrate that the proposed method achieves state-of-the-art performance and renders richer details than existing approaches.
|
2110.13953
|
Mayank Agarwal
|
Mayank Agarwal, Mikhail Yurochkin, Yuekai Sun
|
On sensitivity of meta-learning to support data
|
Accepted at NeurIPS 2021
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Meta-learning algorithms are widely used for few-shot learning; for example,
they enable image recognition systems that readily adapt to unseen classes
after seeing only a few labeled examples. Despite their success, we show that modern
meta-learning algorithms are extremely sensitive to the data used for
adaptation, i.e. support data. In particular, we demonstrate the existence of
(unaltered, in-distribution, natural) images that, when used for adaptation,
yield accuracy as low as 4\% or as high as 95\% on standard few-shot image
classification benchmarks. We explain our empirical findings in terms of class
margins, which in turn suggests that robust and safe meta-learning requires
larger margins than supervised learning.
|
[
{
"created": "Tue, 26 Oct 2021 18:36:37 GMT",
"version": "v1"
}
] |
2021-10-28
|
[
[
"Agarwal",
"Mayank",
""
],
[
"Yurochkin",
"Mikhail",
""
],
[
"Sun",
"Yuekai",
""
]
] |
Meta-learning algorithms are widely used for few-shot learning; for example, they enable image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e. support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4\% or as high as 95\% on standard few-shot image classification benchmarks. We explain our empirical findings in terms of class margins, which in turn suggests that robust and safe meta-learning requires larger margins than supervised learning.
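The kind of sensitivity analysis described above can be mimicked on synthetic data: repeatedly resample a small support set, adapt with a nearest-centroid rule (a stand-in for a trained meta-learner), and inspect the spread of query accuracy.

# Measure how query accuracy varies across resampled 5-shot support sets.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [1.5, 1.5]])                  # two classes
pool = [rng.normal(m, 1.0, size=(100, 2)) for m in means]   # labeled pool
query = np.vstack([rng.normal(m, 1.0, size=(50, 2)) for m in means])
q_lab = np.repeat([0, 1], 50)

accs = []
for _ in range(200):
    protos = np.stack([p[rng.choice(100, 5, replace=False)].mean(0) for p in pool])
    pred = np.argmin(((query[:, None, :] - protos[None]) ** 2).sum(-1), axis=1)
    accs.append((pred == q_lab).mean())
print(f"accuracy across support resamples: min={min(accs):.2f} max={max(accs):.2f}")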
|
2102.10684
|
Ahmed Abdelali
|
Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes
Samih
|
Pre-Training BERT on Arabic Tweets: Practical Considerations
|
6 pages, 5 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Pretraining Bidirectional Encoder Representations from Transformers (BERT)
for downstream NLP tasks is a non-trivial task. We pretrained 5 BERT models that
differ in the size of their training sets, mixture of formal and informal
Arabic, and linguistic preprocessing. All are intended to support Arabic
dialects and social media. The experiments highlight the centrality of data
diversity and the efficacy of linguistically aware segmentation. They also
highlight that more data or more training steps do not necessarily yield better
models. Our new models achieve new state-of-the-art results on several
downstream tasks. The resulting models are released to the community under the
name QARiB.
|
[
{
"created": "Sun, 21 Feb 2021 20:51:33 GMT",
"version": "v1"
}
] |
2021-02-23
|
[
[
"Abdelali",
"Ahmed",
""
],
[
"Hassan",
"Sabit",
""
],
[
"Mubarak",
"Hamdy",
""
],
[
"Darwish",
"Kareem",
""
],
[
"Samih",
"Younes",
""
]
] |
Pretraining Bidirectional Encoder Representations from Transformers (BERT) for downstream NLP tasks is a non-trivial task. We pretrained 5 BERT models that differ in the size of their training sets, mixture of formal and informal Arabic, and linguistic preprocessing. All are intended to support Arabic dialects and social media. The experiments highlight the centrality of data diversity and the efficacy of linguistically aware segmentation. They also highlight that more data or more training steps do not necessarily yield better models. Our new models achieve new state-of-the-art results on several downstream tasks. The resulting models are released to the community under the name QARiB.
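If the released QARiB checkpoints follow the usual Hugging Face layout, they could be loaded as below; the model identifier is an assumption inferred from the name in the abstract, so the official release should be checked for the exact id.

# Hypothetical usage via the transformers fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="qarib/bert-base-qarib")  # assumed model id
print(fill("اللغة [MASK] جميلة")[0]["token_str"])            # top prediction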
|
2112.07831
|
Varsha Lohani
|
Varsha Lohani, Anjali Sharma, Yatindra Nath Singh
|
Optimal Slot Size under Various Bandwidth Distributions in the
Flexible-grid Optical Networks
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Flexible-grid optical networks are an efficient mechanism to provide
flexibility in optical spectrum utilization. For such networks, the slot width
specified by ITU-T G.694.1 is 12.5 GHz. However, one should ask whether this
is the optimal grid size. In this paper, we review which slot size gives
appropriate spectrum efficiency under different bandwidth distribution
scenarios. Moreover, we present a study of slot sizes under varying incoming
traffic with given bandwidth requirements in different scenarios.
|
[
{
"created": "Wed, 15 Dec 2021 01:59:35 GMT",
"version": "v1"
}
] |
2021-12-16
|
[
[
"Lohani",
"Varsha",
""
],
[
"Sharma",
"Anjali",
""
],
[
"Singh",
"Yatindra Nath",
""
]
] |
Flexible-grid optical networks are an efficient mechanism to provide flexibility in optical spectrum utilization. For such networks, the slot width specified by ITU-T G.694.1 is 12.5 GHz. However, one should ask whether this is the optimal grid size. In this paper, we review which slot size gives appropriate spectrum efficiency under different bandwidth distribution scenarios. Moreover, we present a study of slot sizes under varying incoming traffic with given bandwidth requirements in different scenarios.
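The trade-off under study can be quantified with a short calculation: finer slots waste less spectrum per demand, but the optimum depends on the demand mix. The demand values below are made up for illustration.

# Spectrum efficiency (used/allocated) for several candidate slot widths.
import math

demands_ghz = [10, 25, 40, 100, 37.5]          # hypothetical connection demands

def efficiency(slot_ghz):
    allocated = sum(math.ceil(d / slot_ghz) * slot_ghz for d in demands_ghz)
    return sum(demands_ghz) / allocated

for slot in (6.25, 12.5, 25, 50):
    print(f"slot {slot:>5} GHz -> spectrum efficiency {efficiency(slot):.2f}")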
|
2003.12869
|
Chao Yang Mr.
|
Chao Yang, Ser-Nam Lim
|
One-Shot Domain Adaptation For Face Generation
|
Accepted to CVPR 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a framework capable of generating face images that
fall into the same distribution as that of a given one-shot example. We
leverage a pre-trained StyleGAN model that already learned the generic face
distribution. Given the one-shot target, we develop an iterative optimization
scheme that rapidly adapts the weights of the model to shift the output's
high-level distribution to the target's. To generate images of the same
distribution, we introduce a style-mixing technique that transfers the
low-level statistics from the target to faces randomly generated with the
model. With that, we are able to generate an unlimited number of faces that
inherit from the distribution of both generic human faces and the one-shot
example. The newly generated faces can serve as augmented training data for
other downstream tasks. Such setting is appealing as it requires labeling very
few, or even one example, in the target domain, which is often the case of
real-world face manipulations that result from a variety of unknown and unique
distributions, each with extremely low prevalence. We show the effectiveness of
our one-shot approach for detecting face manipulations and compare it with
other few-shot domain adaptation methods qualitatively and quantitatively.
|
[
{
"created": "Sat, 28 Mar 2020 18:50:13 GMT",
"version": "v1"
}
] |
2020-03-31
|
[
[
"Yang",
"Chao",
""
],
[
"Lim",
"Ser-Nam",
""
]
] |
In this paper, we propose a framework capable of generating face images that fall into the same distribution as that of a given one-shot example. We leverage a pre-trained StyleGAN model that already learned the generic face distribution. Given the one-shot target, we develop an iterative optimization scheme that rapidly adapts the weights of the model to shift the output's high-level distribution to the target's. To generate images of the same distribution, we introduce a style-mixing technique that transfers the low-level statistics from the target to faces randomly generated with the model. With that, we are able to generate an unlimited number of faces that inherit from the distribution of both generic human faces and the one-shot example. The newly generated faces can serve as augmented training data for other downstream tasks. Such setting is appealing as it requires labeling very few, or even one example, in the target domain, which is often the case of real-world face manipulations that result from a variety of unknown and unique distributions, each with extremely low prevalence. We show the effectiveness of our one-shot approach for detecting face manipulations and compare it with other few-shot domain adaptation methods qualitatively and quantitatively.
|
1211.6411
|
Mohammed El-Dosuky
|
Mohammed El-Dosuky, Ahmed EL-Bassiouny, Taher Hamza and Magdy Rashad
|
New Heuristics for Interfacing Human Motor System using Brain Waves
| null | null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are many new forms of interfacing human users to machines. We pursue
here an electromechanical form of interaction between human and machine. The
emergence of the brain-computer interface allows mind-to-movement systems. The
story of the Pied Piper inspired us to devise some new heuristics for
interfacing the human motor system using brain waves, by combining a head
helmet and a LumbarMotionMonitor. For the simulation we use Java GridGain.
Brain responses of classified subjects during training indicate that the Probe
can be the best stimulus to rely on in distinguishing between knowledgeable
and non-knowledgeable subjects.
|
[
{
"created": "Sat, 24 Nov 2012 01:21:25 GMT",
"version": "v1"
}
] |
2012-11-28
|
[
[
"El-Dosuky",
"Mohammed",
""
],
[
"EL-Bassiouny",
"Ahmed",
""
],
[
"Hamza",
"Taher",
""
],
[
"Rashad",
"Magdy",
""
]
] |
There are many new forms of interfacing human users to machines. We pursue here an electromechanical form of interaction between human and machine. The emergence of the brain-computer interface allows mind-to-movement systems. The story of the Pied Piper inspired us to devise some new heuristics for interfacing the human motor system using brain waves, by combining a head helmet and a LumbarMotionMonitor. For the simulation we use Java GridGain. Brain responses of classified subjects during training indicate that the Probe can be the best stimulus to rely on in distinguishing between knowledgeable and non-knowledgeable subjects.
|
2201.11464
|
Christoph Matheja
|
Kevin Batz and Ira Fesefeldt and Marvin Jansen and Joost-Pieter Katoen
and Florian Ke{\ss}ler and Christoph Matheja and Thomas Noll
|
Foundations for Entailment Checking in Quantitative Separation Logic
(extended version)
|
Extended version of ESOP'22 paper
| null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Quantitative separation logic (QSL) is an extension of separation logic (SL)
for the verification of probabilistic pointer programs. In QSL, formulae
evaluate to real numbers instead of truth values, e.g., the probability of
memory-safe termination in a given symbolic heap. As with SL, one of the key
problems when reasoning with QSL is \emph{entailment}: does a formula f entail
another formula g?
We give a generic reduction from entailment checking in QSL to entailment
checking in SL. This allows us to leverage the large body of SL research for the
automated verification of probabilistic pointer programs. We analyze the
complexity of our approach and demonstrate its applicability. In particular, we
obtain the first decidability results for the verification of such programs by
applying our reduction to a quantitative extension of the well-known
symbolic-heap fragment of separation logic.
|
[
{
"created": "Thu, 27 Jan 2022 12:07:06 GMT",
"version": "v1"
}
] |
2022-01-28
|
[
[
"Batz",
"Kevin",
""
],
[
"Fesefeldt",
"Ira",
""
],
[
"Jansen",
"Marvin",
""
],
[
"Katoen",
"Joost-Pieter",
""
],
[
"Keßler",
"Florian",
""
],
[
"Matheja",
"Christoph",
""
],
[
"Noll",
"Thomas",
""
]
] |
Quantitative separation logic (QSL) is an extension of separation logic (SL) for the verification of probabilistic pointer programs. In QSL, formulae evaluate to real numbers instead of truth values, e.g., the probability of memory-safe termination in a given symbolic heap. As with SL, one of the key problems when reasoning with QSL is \emph{entailment}: does a formula f entail another formula g? We give a generic reduction from entailment checking in QSL to entailment checking in SL. This allows us to leverage the large body of SL research for the automated verification of probabilistic pointer programs. We analyze the complexity of our approach and demonstrate its applicability. In particular, we obtain the first decidability results for the verification of such programs by applying our reduction to a quantitative extension of the well-known symbolic-heap fragment of separation logic.
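For readers unfamiliar with QSL, the quantitative reading of entailment alluded to above is standardly formulated as pointwise domination over program states; the notation here may differ from the paper's.

% Quantitative entailment: f entails g iff g dominates f pointwise over all
% program states (stack s, heap h). With Boolean 0/1 values this degenerates
% to classical SL entailment.
f \models g
\quad\text{iff}\quad
\forall (s,h).\; \llbracket f \rrbracket(s,h) \;\le\; \llbracket g \rrbracket(s,h)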
|
1612.03412
|
Yochai Blau
|
Yochai Blau and Tomer Michaeli
|
Non-Redundant Spectral Dimensionality Reduction
| null |
European Conference on Machine Learning and Knowledge Discovery in
Databases (ECML PKDD), Part I, LNAI 10534, pp. 256-271, 2017
|
10.1007/978-3-319-71249-9_16
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spectral dimensionality reduction algorithms are widely used in numerous
domains, including for recognition, segmentation, tracking and visualization.
However, despite their popularity, these algorithms suffer from a major
limitation known as the "repeated Eigen-directions" phenomenon. That is, many
of the embedding coordinates they produce typically capture the same direction
along the data manifold. This leads to redundant and inefficient
representations that do not reveal the true intrinsic dimensionality of the
data. In this paper, we propose a general method for avoiding redundancy in
spectral algorithms. Our approach relies on replacing the orthogonality
constraints underlying those methods by unpredictability constraints.
Specifically, we require that each embedding coordinate be unpredictable (in
the statistical sense) from all previous ones. We prove that these constraints
necessarily prevent redundancy, and provide a simple technique to incorporate
them into existing methods. As we illustrate on challenging high-dimensional
scenarios, our approach produces significantly more informative and compact
representations, which improve visualization and classification tasks.
|
[
{
"created": "Sun, 11 Dec 2016 14:04:33 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Apr 2017 12:58:06 GMT",
"version": "v2"
}
] |
2018-01-03
|
[
[
"Blau",
"Yochai",
""
],
[
"Michaeli",
"Tomer",
""
]
] |
Spectral dimensionality reduction algorithms are widely used in numerous domains, including for recognition, segmentation, tracking and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the "repeated Eigen-directions" phenomenon. That is, many of the embedding coordinates they produce typically capture the same direction along the data manifold. This leads to redundant and inefficient representations that do not reveal the true intrinsic dimensionality of the data. In this paper, we propose a general method for avoiding redundancy in spectral algorithms. Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints. Specifically, we require that each embedding coordinate be unpredictable (in the statistical sense) from all previous ones. We prove that these constraints necessarily prevent redundancy, and provide a simple technique to incorporate them into existing methods. As we illustrate on challenging high-dimensional scenarios, our approach produces significantly more informative and compact representations, which improve visualization and classification tasks.
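The redundancy phenomenon is easy to probe empirically: regress each embedding coordinate on the previous ones and inspect R^2; high scores indicate predictable (redundant) coordinates, which the unpredictability constraints are designed to suppress. The data below is synthetic, purely for illustration.

# Predictability check: can coordinate k be regressed from coordinates 1..k-1?
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
t = rng.uniform(0, 3, 500)
Y = np.column_stack([t, np.sin(t), t ** 2])  # coords 2,3 are functions of coord 1

for k in range(1, Y.shape[1]):
    r2 = cross_val_score(RandomForestRegressor(n_estimators=50, random_state=0),
                         Y[:, :k], Y[:, k], cv=3, scoring="r2").mean()
    print(f"coordinate {k + 1} predictable from previous ones: R^2 = {r2:.2f}")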
|
2006.15313
|
Swarup Chattopadhyay
|
Swarup Chattopadhyay, Debasis Ganguly
|
Community Structure aware Embedding of Nodes in a Network
| null | null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting communities or the modular structure of real-life networks (e.g. a
social network or a product purchase network) is an important task because the
way a network functions is often determined by its communities. Traditional
approaches to community detection involve modularity-based algorithms, which
generally speaking, construct partitions based on heuristics that seek to
maximize the ratio of the edges within the partitions to those between them. On
the other hand, node embedding approaches represent each node in a graph as a
real-valued vector and are thereby able to transform the problem of community
detection in a graph to that of clustering a set of vectors. Existing node
embedding approaches are primarily based on, first, initiating random walks
from each node to construct a context of a node, and then making the vector
representation of a node close to its context. However, standard node embedding
approaches do not directly take into account the community structure of a
network while constructing the context around each node. To alleviate this, we
explore two different threads of work. First, we investigate the use of maximum
entropy-based random walks to obtain more centrality-preserving embeddings of
nodes, which may lead to more effective clusters in the embedded space. Second,
we propose a community structure-aware node embedding approach, where we
incorporate modularity-based partitioning heuristics into the objective
function of node embedding. We demonstrate that our proposed combination of the
combinatorial and the embedding approaches for community detection outperforms
a number of modularity-based baselines and K-means clustering on a standard
node-embedded (node2vec) vector space on a wide range of real-life and
synthetic networks of different sizes and densities.
|
[
{
"created": "Sat, 27 Jun 2020 08:07:21 GMT",
"version": "v1"
}
] |
2020-06-30
|
[
[
"Chattopadhyay",
"Swarup",
""
],
[
"Ganguly",
"Debasis",
""
]
] |
Detecting communities or the modular structure of real-life networks (e.g. a social network or a product purchase network) is an important task because the way a network functions is often determined by its communities. Traditional approaches to community detection involve modularity-based algorithms, which generally speaking, construct partitions based on heuristics that seek to maximize the ratio of the edges within the partitions to those between them. On the other hand, node embedding approaches represent each node in a graph as a real-valued vector and are thereby able to transform the problem of community detection in a graph to that of clustering a set of vectors. Existing node embedding approaches are primarily based on, first, initiating random walks from each node to construct a context of a node, and then making the vector representation of a node close to its context. However, standard node embedding approaches do not directly take into account the community structure of a network while constructing the context around each node. To alleviate this, we explore two different threads of work. First, we investigate the use of maximum entropy-based random walks to obtain more centrality-preserving embeddings of nodes, which may lead to more effective clusters in the embedded space. Second, we propose a community structure-aware node embedding approach, where we incorporate modularity-based partitioning heuristics into the objective function of node embedding. We demonstrate that our proposed combination of the combinatorial and the embedding approaches for community detection outperforms a number of modularity-based baselines and K-means clustering on a standard node-embedded (node2vec) vector space on a wide range of real-life and synthetic networks of different sizes and densities.
|
2206.09485
|
U\u{g}ur \c{C}o\u{g}alan
|
U\u{g}ur \c{C}o\u{g}alan, Mojtaba Bemana, Hans-Peter Seidel, Karol
Myszkowski
|
Video frame interpolation for high dynamic range sequences captured with
dual-exposure sensors
|
13 pages, 10 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Video frame interpolation (VFI) enables many important applications that
might involve the temporal domain, such as slow motion playback, or the spatial
domain, such as stop motion sequences. We are focusing on the former task,
where one of the key challenges is handling high dynamic range (HDR) scenes in
the presence of complex motion. To this end, we explore possible advantages of
dual-exposure sensors that readily provide sharp short and blurry long
exposures that are spatially registered and whose ends are temporally aligned.
This way, motion blur registers temporally continuous information on the scene
motion that, combined with the sharp reference, enables more precise motion
sampling within a single camera shot. We demonstrate that this facilitates a
more complex motion reconstruction in the VFI task, as well as HDR frame
reconstruction that so far has been considered only for the originally captured
frames, not in-between interpolated frames. We design a neural network trained
in these tasks that clearly outperforms existing solutions. We also propose a
metric for scene motion complexity that provides important insights into the
performance of VFI methods at the test time.
|
[
{
"created": "Sun, 19 Jun 2022 20:29:34 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Feb 2023 10:09:22 GMT",
"version": "v2"
},
{
"created": "Wed, 31 May 2023 13:58:01 GMT",
"version": "v3"
}
] |
2023-06-01
|
[
[
"Çoğalan",
"Uğur",
""
],
[
"Bemana",
"Mojtaba",
""
],
[
"Seidel",
"Hans-Peter",
""
],
[
"Myszkowski",
"Karol",
""
]
] |
Video frame interpolation (VFI) enables many important applications that might involve the temporal domain, such as slow motion playback, or the spatial domain, such as stop motion sequences. We are focusing on the former task, where one of the key challenges is handling high dynamic range (HDR) scenes in the presence of complex motion. To this end, we explore possible advantages of dual-exposure sensors that readily provide sharp short and blurry long exposures that are spatially registered and whose ends are temporally aligned. This way, motion blur registers temporally continuous information on the scene motion that, combined with the sharp reference, enables more precise motion sampling within a single camera shot. We demonstrate that this facilitates a more complex motion reconstruction in the VFI task, as well as HDR frame reconstruction that so far has been considered only for the originally captured frames, not in-between interpolated frames. We design a neural network trained in these tasks that clearly outperforms existing solutions. We also propose a metric for scene motion complexity that provides important insights into the performance of VFI methods at the test time.
|
1505.01448
|
Thomas Leibovici
|
Thomas Leibovici
|
Taking back control of HPC file systems with Robinhood Policy Engine
|
International Workshop on the Lustre Ecosystem: Challenges and
Opportunities, March 2015, Annapolis MD
| null | null | null |
cs.DC cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today, the largest Lustre file systems store billions of entries. On such
systems, classic tools based on namespace scanning become unusable. Operations
such as managing file lifetime, scheduling data copies, and generating overall
filesystem statistics become painful as they require collecting, sorting and
aggregating information for billions of records. Robinhood Policy Engine is an
open source software developed to address these challenges. It makes it
possible to schedule automatic actions on huge numbers of filesystem entries.
It also gives a synthetic understanding of file systems contents by providing
overall statistics about data ownership, age and size profiles. Although it can
be used with any POSIX filesystem, Robinhood supports Lustre-specific features
like OSTs, pools, HSM, ChangeLogs, and DNE. It implements specific support for
these features, and takes advantage of them to manage Lustre file systems
efficiently.
|
[
{
"created": "Wed, 6 May 2015 18:14:56 GMT",
"version": "v1"
}
] |
2015-05-07
|
[
[
"Leibovici",
"Thomas",
""
]
] |
Today, the largest Lustre file systems store billions of entries. On such systems, classic tools based on namespace scanning become unusable. Operations such as managing file lifetime, scheduling data copies, and generating overall filesystem statistics become painful as they require collecting, sorting and aggregating information for billions of records. Robinhood Policy Engine is an open source software developed to address these challenges. It makes it possible to schedule automatic actions on huge numbers of filesystem entries. It also gives a synthetic understanding of file systems contents by providing overall statistics about data ownership, age and size profiles. Although it can be used with any POSIX filesystem, Robinhood supports Lustre-specific features like OSTs, pools, HSM, ChangeLogs, and DNE. It implements specific support for these features, and takes advantage of them to manage Lustre file systems efficiently.
|
2004.01353
|
Mihai Sima
|
Ash Luft, Mihai Sima, Michael McGuire
|
Hardware Trojan with Frequency Modulation
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of third-party IP cores in implementing applications in FPGAs has
given rise to the threat of malicious alterations through the insertion of
hardware Trojans. To address this threat, it is important to predict the way
hardware Trojans are built and to identify their weaknesses. This paper
describes a logic family for implementing robust hardware Trojans, which can
evade the two major detection methods, namely unused-circuit identification and
side-channel analysis. This robustness is achieved by encoding information in
frequency rather than amplitude so that the Trojan trigger circuitry's state
will never stay constant during 'normal' operation. In addition, the power
consumption of Trojan circuits built using the proposed logic family can be
concealed with minimal design effort and supplementary hardware resources.
Defense measures against hardware Trojans with frequency modulation are
described.
|
[
{
"created": "Fri, 3 Apr 2020 03:17:14 GMT",
"version": "v1"
}
] |
2020-04-06
|
[
[
"Luft",
"Ash",
""
],
[
"Sima",
"Mihai",
""
],
[
"McGuire",
"Michael",
""
]
] |
The use of third-party IP cores in implementing applications in FPGAs has given rise to the threat of malicious alterations through the insertion of hardware Trojans. To address this threat, it is important to predict the way hardware Trojans are built and to identify their weaknesses. This paper describes a logic family for implementing robust hardware Trojans, which can evade the two major detection methods, namely unused-circuit identification and side-channel analysis. This robustness is achieved by encoding information in frequency rather than amplitude so that the Trojan trigger circuitry's state will never stay constant during 'normal' operation. In addition, the power consumption of Trojan circuits built using the proposed logic family can be concealed with minimal design effort and supplementary hardware resources. Defense measures against hardware Trojans with frequency modulation are described.
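A minimal numerical sketch (not from the paper) of why amplitude-oriented side-channel analysis misses frequency-encoded state: two hypothetical Trojan states toggle at different carrier frequencies, so their average switching activity, a proxy for mean power draw, is identical, while only the spectrum reveals the state. All names and frequencies below are illustrative.

```python
# Hedged sketch: frequency- vs amplitude-encoded Trojan state.
import numpy as np

fs = 1e6                                  # sample rate of a simulated power trace
t = np.arange(0, 0.01, 1 / fs)

def toggling_power(freq):
    """Square-wave switching activity (0/1 toggles) at `freq` Hz."""
    return (np.sign(np.sin(2 * np.pi * freq * t)) + 1) / 2

idle = toggling_power(50e3)               # dormant state: carrier at 50 kHz
armed = toggling_power(80e3)              # triggered state: carrier at 80 kHz

print(idle.mean(), armed.mean())          # ~0.5 for both -> same average power
peak = lambda x: np.abs(np.fft.rfft(x - x.mean())).argmax() * fs / len(t)
print(peak(idle), peak(armed))            # the spectrum reveals the state instead
```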
|
1807.04709
|
Emma Pierson
|
Emma Pierson, Pang Wei Koh, Tatsunori Hashimoto, Daphne Koller, Jure
Leskovec, Nicholas Eriksson, Percy Liang
|
Inferring Multidimensional Rates of Aging from Cross-Sectional Data
|
Accepted at AISTATS 2019
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling how individuals evolve over time is a fundamental problem in the
natural and social sciences. However, existing datasets are often
cross-sectional with each individual observed only once, making it impossible
to apply traditional time-series methods. Motivated by the study of human
aging, we present an interpretable latent-variable model that learns temporal
dynamics from cross-sectional data. Our model represents each individual's
features over time as a nonlinear function of a low-dimensional,
linearly-evolving latent state. We prove that when this nonlinear function is
constrained to be order-isomorphic, the model family is identifiable solely
from cross-sectional data provided the distribution of time-independent
variation is known. On the UK Biobank human health dataset, our model
reconstructs the observed data while learning interpretable rates of aging
associated with diseases, mortality, and aging risk factors.
|
[
{
"created": "Thu, 12 Jul 2018 16:27:40 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Nov 2018 21:38:56 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Mar 2019 18:22:09 GMT",
"version": "v3"
}
] |
2019-03-06
|
[
[
"Pierson",
"Emma",
""
],
[
"Koh",
"Pang Wei",
""
],
[
"Hashimoto",
"Tatsunori",
""
],
[
"Koller",
"Daphne",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Eriksson",
"Nicholas",
""
],
[
"Liang",
"Percy",
""
]
] |
Modeling how individuals evolve over time is a fundamental problem in the natural and social sciences. However, existing datasets are often cross-sectional with each individual observed only once, making it impossible to apply traditional time-series methods. Motivated by the study of human aging, we present an interpretable latent-variable model that learns temporal dynamics from cross-sectional data. Our model represents each individual's features over time as a nonlinear function of a low-dimensional, linearly-evolving latent state. We prove that when this nonlinear function is constrained to be order-isomorphic, the model family is identifiable solely from cross-sectional data provided the distribution of time-independent variation is known. On the UK Biobank human health dataset, our model reconstructs the observed data while learning interpretable rates of aging associated with diseases, mortality, and aging risk factors.
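A hedged generative sketch of the model class the abstract describes: a linearly-evolving, low-dimensional latent state observed once per individual through a monotone (order-isomorphic) nonlinearity. The loading matrix, nonlinearity and noise scales below are illustrative assumptions, not the paper's exact parameterisation.

```python
# Hedged sketch: cross-sectional data from a linearly-evolving latent state.
import numpy as np

rng = np.random.default_rng(0)
n, d_latent, d_obs = 1000, 2, 10

A = rng.normal(size=(d_obs, d_latent))        # loading of latent onto features
f = lambda u: u + 0.1 * u**3                  # monotone elementwise nonlinearity

t = rng.uniform(40, 80, size=n)               # each person observed once (age)
z0 = rng.normal(size=(n, d_latent))           # time-independent variation
rates = rng.normal(1.0, 0.2, size=(n, d_latent))   # per-person aging rates
z = z0 + rates * t[:, None]                   # z(t) = z0 + r * t
x = f(z @ A.T) + 0.1 * rng.normal(size=(n, d_obs))  # cross-sectional dataset
```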
|
1206.1105
|
Biao Xiang
|
Biao Xiang, Enhong Chen, Qi Liu, Hui Xiong
|
A Linear Circuit Model For Social Influence Analysis
|
arXiv admin note: substantial text overlap with arXiv:1205.6024
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the behaviors of information propagation is essential for the
effective exploitation of social influence in social networks. However, few
existing influence models are both tractable and efficient for describing the
information propagation process and quantitatively measuring social influence.
To this end, in this paper, we develop a linear social influence model, named
Circuit due to its close relation to circuit networks. Based on the
predefined four axioms of social influence, we first demonstrate that our model
can efficiently measure the influence strength between any pair of nodes. Along
this line, an upper bound of the node(s)' influence is identified for potential
use, e.g., reducing the search space. Furthermore, we provide the physical
implication of the Circuit model and also a deep analysis of its relationships
with the existing methods, such as PageRank. Then, we propose that the Circuit
model provides a natural solution to the problems of computing each single
node's authority and finding a set of nodes for social influence maximization.
Finally, the effectiveness of the proposed model is evaluated on real-world
data. The extensive experimental results demonstrate that the Circuit model
consistently outperforms the state-of-the-art methods and can greatly alleviate
the computational burden of the influence maximization problem.
|
[
{
"created": "Wed, 6 Jun 2012 01:58:41 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Sep 2012 05:28:53 GMT",
"version": "v2"
}
] |
2012-09-11
|
[
[
"Xiang",
"Biao",
""
],
[
"Chen",
"Enhong",
""
],
[
"Liu",
"Qi",
""
],
[
"Xiong",
"Hui",
""
]
] |
Understanding the behaviors of information propagation is essential for the effective exploitation of social influence in social networks. However, few existing influence models are both tractable and efficient for describing the information propagation process and quantitatively measuring social influence. To this end, in this paper, we develop a linear social influence model, named Circuit due to its close relation to circuit networks. Based on the predefined four axioms of social influence, we first demonstrate that our model can efficiently measure the influence strength between any pair of nodes. Along this line, an upper bound of the node(s)' influence is identified for potential use, e.g., reducing the search space. Furthermore, we provide the physical implication of the Circuit model and also a deep analysis of its relationships with the existing methods, such as PageRank. Then, we propose that the Circuit model provides a natural solution to the problems of computing each single node's authority and finding a set of nodes for social influence maximization. Finally, the effectiveness of the proposed model is evaluated on real-world data. The extensive experimental results demonstrate that the Circuit model consistently outperforms the state-of-the-art methods and can greatly alleviate the computational burden of the influence maximization problem.
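As a rough illustration of the linear-model idea, a node's influence vector in models of this family can be obtained by solving a sparse linear system, much like injecting current at one node of a resistive circuit (and closely related to personalized PageRank). The operator below is a generic choice for the sketch, not the one derived from the paper's four axioms.

```python
# Hedged sketch: linear influence of a source node via one linear solve.
import numpy as np

W = np.array([[0, 1, 1, 0],               # toy undirected graph, row-normalised
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
W = W / W.sum(axis=1, keepdims=True)

alpha, s = 0.85, 0                        # damping and source node
e = np.zeros(len(W)); e[s] = 1.0
influence = np.linalg.solve(np.eye(len(W)) - alpha * W.T, e)
print(influence)                          # "current" injected at s, spread linearly
```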
|
1708.07878
|
Rui Wang
|
Rui Wang, Martin Schw\"orer, Daniel Cremers
|
Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo
Cameras
|
ICCV 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for
highly accurate real-time visual odometry estimation of large-scale
environments from stereo cameras. It jointly optimizes for all the model
parameters within the active window, including the intrinsic/extrinsic camera
parameters of all keyframes and the depth values of all selected pixels. In
particular, we propose a novel approach to integrate constraints from static
stereo into the bundle adjustment pipeline of temporal multi-view stereo.
Real-time optimization is realized by sampling pixels uniformly from image
regions with sufficient intensity gradient. Fixed-baseline stereo resolves
scale drift. It also reduces the sensitivity to large optical flow and to the
rolling shutter effect, which are known shortcomings of direct image alignment
methods. Quantitative evaluation demonstrates that the proposed Stereo DSO
outperforms existing state-of-the-art visual odometry methods both in terms of
tracking accuracy and robustness. Moreover, our method delivers a more precise
metric 3D reconstruction than previous dense/semi-dense direct approaches while
providing a higher reconstruction density than feature-based methods.
|
[
{
"created": "Fri, 25 Aug 2017 20:50:54 GMT",
"version": "v1"
}
] |
2017-08-29
|
[
[
"Wang",
"Rui",
""
],
[
"Schwörer",
"Martin",
""
],
[
"Cremers",
"Daniel",
""
]
] |
We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes for all the model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivity to large optical flow and to the rolling shutter effect, which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.
|
1902.05176
|
Ashis Banerjee
|
Behnoosh Parsa, Ekta U. Samani, Rose Hendrix, Cameron Devine, Shashi
M. Singh, Santosh Devasia, and Ashis G. Banerjee
|
Toward Ergonomic Risk Prediction via Segmentation of Indoor Object
Manipulation Actions Using Spatiotemporal Convolutional Networks
| null | null |
10.1109/LRA.2019.2925305
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated real-time prediction of the ergonomic risks of manipulating objects
is a key unsolved challenge in developing effective human-robot collaboration
systems for logistics and manufacturing applications. We present a foundational
paradigm to address this challenge by formulating the problem as one of action
segmentation from RGB-D camera videos. Spatial features are first learned using
a deep convolutional model from the video frames, which are then fed
sequentially to temporal convolutional networks to semantically segment the
frames into a hierarchy of actions, which are either ergonomically safe,
require monitoring, or need immediate attention. For performance evaluation, in
addition to an open-source kitchen dataset, we collected a new dataset
comprising twenty individuals picking up and placing objects of varying weights
to and from cabinet and table locations at various heights. Results show very
high (87-94)% F1 overlap scores between the ground truth and predicted frame
labels for videos lasting over two minutes and consisting of a large number of
actions.
|
[
{
"created": "Thu, 14 Feb 2019 00:53:07 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jun 2019 04:39:37 GMT",
"version": "v2"
}
] |
2019-06-27
|
[
[
"Parsa",
"Behnoosh",
""
],
[
"Samani",
"Ekta U.",
""
],
[
"Hendrix",
"Rose",
""
],
[
"Devine",
"Cameron",
""
],
[
"Singh",
"Shashi M.",
""
],
[
"Devasia",
"Santosh",
""
],
[
"Banerjee",
"Ashis G.",
""
]
] |
Automated real-time prediction of the ergonomic risks of manipulating objects is a key unsolved challenge in developing effective human-robot collaboration systems for logistics and manufacturing applications. We present a foundational paradigm to address this challenge by formulating the problem as one of action segmentation from RGB-D camera videos. Spatial features are first learned using a deep convolutional model from the video frames, which are then fed sequentially to temporal convolutional networks to semantically segment the frames into a hierarchy of actions, which are either ergonomically safe, require monitoring, or need immediate attention. For performance evaluation, in addition to an open-source kitchen dataset, we collected a new dataset comprising twenty individuals picking up and placing objects of varying weights to and from cabinet and table locations at various heights. Results show very high (87-94)% F1 overlap scores between the ground truth and predicted frame labels for videos lasting over two minutes and consisting of a large number of actions.
|
2311.11638
|
Chunming He
|
Chunming He, Chengyu Fang, Yulun Zhang, Tian Ye, Kai Li, Longxiang
Tang, Zhenhua Guo, Xiu Li, Sina Farsiu
|
Reti-Diff: Illumination Degradation Image Restoration with Retinex-based
Latent Diffusion Model
|
20 pages, 11 figures, 11 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Illumination degradation image restoration (IDIR) techniques aim to improve
the visibility of degraded images and mitigate the adverse effects of
deteriorated illumination. Among these algorithms, diffusion model (DM)-based
methods have shown promising performance but are often burdened by heavy
computational demands and pixel misalignment issues when predicting the
image-level distribution. To tackle these problems, we propose to leverage DM
within a compact latent space to generate concise guidance priors and introduce
a novel solution called Reti-Diff for the IDIR task. Reti-Diff comprises two
key components: the Retinex-based latent DM (RLDM) and the Retinex-guided
transformer (RGformer). To ensure detailed reconstruction and illumination
correction, RLDM is empowered to acquire Retinex knowledge and extract
reflectance and illumination priors. These priors are subsequently utilized by
RGformer to guide the decomposition of image features into their respective
reflectance and illumination components. Following this, RGformer further
enhances and consolidates the decomposed features, resulting in the production
of refined images with consistent content and robustness to handle complex
degradation scenarios. Extensive experiments show that Reti-Diff outperforms
existing methods on three IDIR tasks, as well as downstream applications. Code
will be available at \url{https://github.com/ChunmingHe/Reti-Diff}.
|
[
{
"created": "Mon, 20 Nov 2023 09:55:06 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Mar 2024 07:59:41 GMT",
"version": "v2"
}
] |
2024-03-12
|
[
[
"He",
"Chunming",
""
],
[
"Fang",
"Chengyu",
""
],
[
"Zhang",
"Yulun",
""
],
[
"Ye",
"Tian",
""
],
[
"Li",
"Kai",
""
],
[
"Tang",
"Longxiang",
""
],
[
"Guo",
"Zhenhua",
""
],
[
"Li",
"Xiu",
""
],
[
"Farsiu",
"Sina",
""
]
] |
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination. Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution. To tackle these problems, we propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task. Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer (RGformer). To ensure detailed reconstruction and illumination correction, RLDM is empowered to acquire Retinex knowledge and extract reflectance and illumination priors. These priors are subsequently utilized by RGformer to guide the decomposition of image features into their respective reflectance and illumination components. Following this, RGformer further enhances and consolidates the decomposed features, resulting in the production of refined images with consistent content and robustness to handle complex degradation scenarios. Extensive experiments show that Reti-Diff outperforms existing methods on three IDIR tasks, as well as downstream applications. Code will be available at \url{https://github.com/ChunmingHe/Reti-Diff}.
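For readers unfamiliar with Retinex, a minimal sketch of the classical decomposition I = R * L that the reflectance and illumination priors refer to, using a crude max-channel illumination estimate. The paper learns these priors with a latent diffusion model; this only illustrates the decomposition itself, and sigma/eps are illustrative assumptions.

```python
# Hedged sketch: classical Retinex decomposition I = R * L.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=15.0, eps=1e-4):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    L = gaussian_filter(img.max(axis=2), sigma)   # smooth illumination map
    R = img / (L[..., None] + eps)                # implied reflectance component
    return R.clip(0, None), L

img = np.random.rand(64, 64, 3)                   # stand-in for a dark photograph
R, L = retinex_decompose(img)
```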
|
1206.3599
|
Siddhartha Banerjee
|
Siddhartha Banerjee, Aditya Gopalan, Abhik Kumar Das, Sanjay
Shakkottai
|
Epidemic Spreading with External Agents
| null | null |
10.1109/TIT.2014.2316801
| null |
cs.SI cs.IT cs.NI math.IT physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study epidemic spreading processes in large networks, when the spread is
assisted by a small number of external agents: infection sources with bounded
spreading power, but whose movement is unrestricted vis-\`a-vis the underlying
network topology. For networks which are `spatially constrained', we show that
the spread of infection can be significantly speeded up even by a few such
external agents infecting randomly. Moreover, for general networks, we derive
upper-bounds on the order of the spreading time achieved by certain simple
(random/greedy) external-spreading policies. Conversely, for certain common
classes of networks such as line graphs, grids and random geometric graphs, we
also derive lower bounds on the order of the spreading time over all
(potentially network-state aware and adversarial) external-spreading policies;
these adversarial lower bounds match (up to logarithmic factors) the spreading
time achieved by an external agent with a random spreading policy. This
demonstrates that random, state-oblivious infection-spreading by an external
agent is in fact order-wise optimal for spreading in such spatially constrained
networks.
|
[
{
"created": "Fri, 15 Jun 2012 21:10:28 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Apr 2014 04:20:15 GMT",
"version": "v2"
}
] |
2014-04-15
|
[
[
"Banerjee",
"Siddhartha",
""
],
[
"Gopalan",
"Aditya",
""
],
[
"Das",
"Abhik Kumar",
""
],
[
"Shakkottai",
"Sanjay",
""
]
] |
We study epidemic spreading processes in large networks, when the spread is assisted by a small number of external agents: infection sources with bounded spreading power, but whose movement is unrestricted vis-\`a-vis the underlying network topology. For networks which are `spatially constrained', we show that the spread of infection can be significantly speeded up even by a few such external agents infecting randomly. Moreover, for general networks, we derive upper-bounds on the order of the spreading time achieved by certain simple (random/greedy) external-spreading policies. Conversely, for certain common classes of networks such as line graphs, grids and random geometric graphs, we also derive lower bounds on the order of the spreading time over all (potentially network-state aware and adversarial) external-spreading policies; these adversarial lower bounds match (up to logarithmic factors) the spreading time achieved by an external agent with a random spreading policy. This demonstrates that random, state-oblivious infection-spreading by an external agent is in fact order-wise optimal for spreading in such spatially constrained networks.
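A toy simulation sketch of the setting: SI spreading on a ring (a strongly spatially constrained graph) with one external agent infecting a uniformly random node per step. It illustrates the speed-up from random, state-oblivious external spreading; parameters are illustrative and this is not the paper's analysis.

```python
# Hedged sketch: SI spread on a ring with and without a random external agent.
import random

def si_spread_time(n=500, external=True, seed=1):
    random.seed(seed)
    infected = {0}
    steps = 0
    while len(infected) < n:
        newly = set()
        for v in infected:                    # intrinsic spread to ring neighbours
            newly.update({(v - 1) % n, (v + 1) % n})
        if external:                          # random, state-oblivious agent
            newly.add(random.randrange(n))
        infected |= newly
        steps += 1
    return steps

print(si_spread_time(external=False), si_spread_time(external=True))
```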
|
2109.07436
|
Sriram Gopalakrishnan
|
Sriram Gopalakrishnan, Mudit Verma, Subbarao Kambhampati
|
Computing Policies That Account For The Effects Of Human Agent
Uncertainty During Execution In Markov Decision Processes
|
7 page paper, 6 pages supplemental material
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When humans are given a policy to execute, there can be policy execution
errors and deviations from the policy if there is uncertainty in identifying a
state. This can happen due to the human agent's cognitive limitations and/or
perceptual errors. So an algorithm that computes a policy for a human to
execute ought to consider these effects in its computations. An optimal Markov
Decision Process (MDP) policy that is poorly executed (because of a human
agent) may be much worse than another policy that is suboptimal in the MDP but
considers the human agent's execution behavior. In this paper we consider two
problems that arise from state uncertainty: erroneous state inference, and
extra sensing actions that a person might take as a result of their
uncertainty. We present a framework to model the human agent's behavior with
respect to state uncertainty, which can be used to compute MDP policies that
account for these problems. This is followed by a hill climbing algorithm to
search for good policies given our model of the human agent. We also present a
branch and bound algorithm which can find the optimal policy for such
problems. We show experimental results in a Gridworld domain and a
warehouse-worker domain. Finally, we present human-subject studies that
support our human model assumptions.
|
[
{
"created": "Wed, 15 Sep 2021 17:10:46 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Sep 2021 21:24:20 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Mar 2022 22:00:30 GMT",
"version": "v3"
}
] |
2022-03-07
|
[
[
"Gopalakrishnan",
"Sriram",
""
],
[
"Verma",
"Mudit",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
When humans are given a policy to execute, there can be policy execution errors and deviations from the policy if there is uncertainty in identifying a state. This can happen due to the human agent's cognitive limitations and/or perceptual errors. So an algorithm that computes a policy for a human to execute ought to consider these effects in its computations. An optimal Markov Decision Process (MDP) policy that is poorly executed (because of a human agent) may be much worse than another policy that is suboptimal in the MDP but considers the human agent's execution behavior. In this paper we consider two problems that arise from state uncertainty: erroneous state inference, and extra sensing actions that a person might take as a result of their uncertainty. We present a framework to model the human agent's behavior with respect to state uncertainty, which can be used to compute MDP policies that account for these problems. This is followed by a hill climbing algorithm to search for good policies given our model of the human agent. We also present a branch and bound algorithm which can find the optimal policy for such problems. We show experimental results in a Gridworld domain and a warehouse-worker domain. Finally, we present human-subject studies that support our human model assumptions.
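A hedged sketch of the evaluation step such an approach needs (not the paper's algorithm): the value of a policy when the human executes the intended action only with probability 1 - eps and a uniformly random other action otherwise. A search procedure such as hill climbing or branch and bound can then optimize over policies using this noise-aware evaluation.

```python
# Hedged sketch: policy evaluation under a noisy human-execution model.
import numpy as np

def evaluate(P, R, policy, eps=0.2, gamma=0.95):
    """P: (A, S, S) transitions, R: (S,) rewards, policy: (S,) intended actions."""
    A, S, _ = P.shape
    P_exec = np.zeros((S, S))
    for s in range(S):
        for a in range(A):
            p_a = (1 - eps) if a == policy[s] else eps / (A - 1)
            P_exec[s] += p_a * P[a, s]        # effective executed transition row
    return np.linalg.solve(np.eye(S) - gamma * P_exec, R)

# toy 2-state, 2-action MDP
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.1, 0.9], [0.8, 0.2]]])
R = np.array([1.0, 0.0])
print(evaluate(P, R, policy=np.array([0, 1])))
```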
|
1904.12099
|
Jiaqi Yang
|
Jiaqi Yang, Chen Zhao, Ke Xian, Angfan Zhu, Zhiguo Cao
|
Learning to Fuse Local Geometric Features for 3D Rigid Data Matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a simple yet very effective data-driven approach to fuse
both low-level and high-level local geometric features for 3D rigid data
matching. It is a common practice to generate distinctive geometric descriptors
by fusing low-level features from various viewpoints or subspaces, or enhance
geometric feature matching by leveraging multiple high-level features. In prior
works, they are typically performed via linear operations such as concatenation
and min pooling. We show that more compact and distinctive representations can
be achieved by optimizing a neural network (NN) model under the triplet
framework that non-linearly fuses local geometric features in Euclidean spaces.
The NN model is trained by an improved triplet loss function that fully
leverages all pairwise relationships within the triplet. Moreover, the fused
descriptor by our approach is also competitive with deep learned descriptors
from raw data while being more lightweight and rotationally invariant.
Experimental
results on four standard datasets with various data modalities and application
contexts confirm the advantages of our approach in terms of both feature
matching and geometric registration.
|
[
{
"created": "Sat, 27 Apr 2019 03:23:21 GMT",
"version": "v1"
}
] |
2019-04-30
|
[
[
"Yang",
"Jiaqi",
""
],
[
"Zhao",
"Chen",
""
],
[
"Xian",
"Ke",
""
],
[
"Zhu",
"Angfan",
""
],
[
"Cao",
"Zhiguo",
""
]
] |
This paper presents a simple yet very effective data-driven approach to fuse both low-level and high-level local geometric features for 3D rigid data matching. It is a common practice to generate distinctive geometric descriptors by fusing low-level features from various viewpoints or subspaces, or enhance geometric feature matching by leveraging multiple high-level features. In prior works, they are typically performed via linear operations such as concatenation and min pooling. We show that more compact and distinctive representations can be achieved by optimizing a neural network (NN) model under the triplet framework that non-linearly fuses local geometric features in Euclidean spaces. The NN model is trained by an improved triplet loss function that fully leverages all pairwise relationships within the triplet. Moreover, the fused descriptor by our approach is also competitive with deep learned descriptors from raw data while being more lightweight and rotationally invariant. Experimental results on four standard datasets with various data modalities and application contexts confirm the advantages of our approach in terms of both feature matching and geometric registration.
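A minimal sketch of the standard margin-based triplet objective underlying the approach; the paper's improved loss additionally exploits all pairwise relationships within each triplet, which is omitted here. Shapes and the margin value are illustrative.

```python
# Hedged sketch: standard triplet loss over fused descriptor embeddings.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = np.sum((anchor - positive) ** 2, axis=1)   # anchor-positive distance
    d_an = np.sum((anchor - negative) ** 2, axis=1)   # anchor-negative distance
    return np.maximum(0.0, d_ap - d_an + margin).mean()

rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(32, 64)) for _ in range(3))  # fused descriptors
print(triplet_loss(a, p, n))
```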
|
1803.04722
|
Xiao Song
|
Xiao Song, Xu Zhao, Tianwei Lin
|
Face Spoofing Detection by Fusing Binocular Depth and Spatial Pyramid
Coding Micro-Texture Features
|
5 pages, 2 figures, accepted by 2017 IEEE International Conference on
Image Processing (ICIP)
| null |
10.1109/ICIP.2017.8296250
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust features are of vital importance to face spoofing detection, because
various situations make the feature space extremely complicated to partition.
Thus
in this paper, two novel and robust features for anti-spoofing are proposed.
The first one is a binocular camera based depth feature called Template Face
Matched Binocular Depth (TFBD) feature. The second one is a high-level
micro-texture based feature called Spatial Pyramid Coding Micro-Texture (SPMT)
feature. Novel template face registration algorithm and spatial pyramid coding
algorithm are also introduced along with the two novel features. Multi-modal
face spoofing detection is implemented based on these two robust features.
Experiments are conducted on a widely used dataset and a comprehensive dataset
constructed by ourselves. The results reveal that face spoofing detection with
the fusion of our proposed features is robust and time-efficient, while
outperforming other state-of-the-art traditional methods.
|
[
{
"created": "Tue, 13 Mar 2018 10:49:45 GMT",
"version": "v1"
}
] |
2018-03-14
|
[
[
"Song",
"Xiao",
""
],
[
"Zhao",
"Xu",
""
],
[
"Lin",
"Tianwei",
""
]
] |
Robust features are of vital importance to face spoofing detection, because various situations make the feature space extremely complicated to partition. Thus in this paper, two novel and robust features for anti-spoofing are proposed. The first one is a binocular camera based depth feature called Template Face Matched Binocular Depth (TFBD) feature. The second one is a high-level micro-texture based feature called Spatial Pyramid Coding Micro-Texture (SPMT) feature. Novel template face registration algorithm and spatial pyramid coding algorithm are also introduced along with the two novel features. Multi-modal face spoofing detection is implemented based on these two robust features. Experiments are conducted on a widely used dataset and a comprehensive dataset constructed by ourselves. The results reveal that face spoofing detection with the fusion of our proposed features is robust and time-efficient, while outperforming other state-of-the-art traditional methods.
|
2304.02712
|
Baidyanath Kundu
|
Baidyanath Kundu (1 and 2), Vassil Vassilev (1 and 2), Wim Lavrijsen
(3) ((1) European Council for Nuclear Research, (2) Princeton University
(US), (3) LBNL (US))
|
Efficient and Accurate Automatic Python Bindings with cppyy & Cling
|
7 pages, 3 figures, 1 table; submitted to ACAT 2022 proceedings
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
The simplicity of Python and the power of C++ force stark choices on a
scientific software stack. There have been multiple developments to mitigate
language boundaries by implementing language bindings, but the impedance
mismatch between the static nature of C++ and the dynamic one of Python hinders
their implementation; examples include the use of user-defined Python types
with templated C++ and advanced memory management.
The development of the C++ interpreter Cling has changed the way we can think
of language bindings as it provides an incremental compilation infrastructure
available at runtime. That is, Python can interrogate C++ on demand, and
bindings can be lazily constructed at runtime. This automatic binding provision
requires no direct support from library authors and offers better performance
than alternative solutions, such as PyBind11. ROOT pioneered this approach with
PyROOT, which was later enhanced with its successor, cppyy. However, until now,
cppyy relied on the reflection layer of ROOT, which is limited in terms of
provided features and performance.
This paper presents the next step for language interoperability with cppyy,
enabling research into uniform cross-language execution environments and
boosting optimization opportunities across language boundaries. We illustrate
the use of advanced C++ in Numba-accelerated Python through cppyy. We outline a
path forward for re-engineering parts of cppyy to use upstream LLVM components
to improve performance and sustainability. We demonstrate cppyy purely based on
a C++ reflection library, InterOp, which offers interoperability primitives
based on Cling and Clang-Repl.
|
[
{
"created": "Wed, 5 Apr 2023 19:12:05 GMT",
"version": "v1"
}
] |
2023-04-07
|
[
[
"Kundu",
"Baidyanath",
"",
"1 and 2"
],
[
"Vassilev",
"Vassil",
"",
"1 and 2"
],
[
"Lavrijsen",
"Wim",
""
]
] |
The simplicity of Python and the power of C++ force stark choices on a scientific software stack. There have been multiple developments to mitigate language boundaries by implementing language bindings, but the impedance mismatch between the static nature of C++ and the dynamic one of Python hinders their implementation; examples include the use of user-defined Python types with templated C++ and advanced memory management. The development of the C++ interpreter Cling has changed the way we can think of language bindings as it provides an incremental compilation infrastructure available at runtime. That is, Python can interrogate C++ on demand, and bindings can be lazily constructed at runtime. This automatic binding provision requires no direct support from library authors and offers better performance than alternative solutions, such as PyBind11. ROOT pioneered this approach with PyROOT, which was later enhanced with its successor, cppyy. However, until now, cppyy relied on the reflection layer of ROOT, which is limited in terms of provided features and performance. This paper presents the next step for language interoperability with cppyy, enabling research into uniform cross-language execution environments and boosting optimization opportunities across language boundaries. We illustrate the use of advanced C++ in Numba-accelerated Python through cppyy. We outline a path forward for re-engineering parts of cppyy to use upstream LLVM components to improve performance and sustainability. We demonstrate cppyy purely based on a C++ reflection library, InterOp, which offers interoperability primitives based on Cling and Clang-Repl.
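A small standard cppyy session illustrating the on-demand binding mechanism the paper builds on: C++ code is compiled incrementally by Cling at runtime and becomes immediately callable from Python, including lazy template instantiation with Python-chosen types. This uses the public cppyy API and is independent of the InterOp re-engineering described above.

```python
# cppyy usage sketch: runtime C++ compilation and lazy template binding.
import cppyy

cppyy.cppdef("""
#include <vector>
template<typename T>
T scaled_sum(const std::vector<T>& v, T scale) {
    T total = T{};
    for (auto x : v) total += x;
    return total * scale;
}
""")

v = cppyy.gbl.std.vector['double']([1.0, 2.0, 3.0])   # lazily bound template type
print(cppyy.gbl.scaled_sum['double'](v, 2.0))         # -> 12.0
```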
|
1504.00854
|
David Powers
|
David M. W. Powers
|
Evaluation Evaluation a Monte Carlo study
|
5 pages, 14 Equations, 2 Figures, 1 Table, as submitted to European
Conference on Artificial Intelligence (shorter version published with 2
pages, 4 Equations, 0 Figures, 1 Table)
|
ECAI 2008, pp.843-844
| null | null |
cs.AI cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the last decade there has been increasing concern about the biases
embodied in traditional evaluation methods for Natural Language
Processing/Learning, particularly methods borrowed from Information Retrieval.
Without knowledge of the Bias and Prevalence of the contingency being tested,
or equivalently the expectation due to chance, the simple conditional
probabilities Recall, Precision and Accuracy are not meaningful as evaluation
measures, either individually or in combinations such as F-factor. The
existence of bias in NLP measures leads to the 'improvement' of systems by
increasing their bias, such as the practice of improving tagging and parsing
scores by using the most common value (e.g. water is always a Noun) rather than
attempting to discover the correct one. The measures Cohen Kappa and Powers
Informedness are discussed as unbiased alternatives to Recall and related to the
psychologically significant measure DeltaP. In this paper we will analyze both
biased and unbiased measures theoretically, characterizing the precise
relationship between all these measures as well as evaluating the evaluation
measures themselves empirically using a Monte Carlo simulation.
|
[
{
"created": "Fri, 3 Apr 2015 14:46:29 GMT",
"version": "v1"
}
] |
2015-04-06
|
[
[
"Powers",
"David M. W.",
""
]
] |
Over the last decade there has been increasing concern about the biases embodied in traditional evaluation methods for Natural Language Processing/Learning, particularly methods borrowed from Information Retrieval. Without knowledge of the Bias and Prevalence of the contingency being tested, or equivalently the expectation due to chance, the simple conditional probabilities Recall, Precision and Accuracy are not meaningful as evaluation measures, either individually or in combinations such as F-factor. The existence of bias in NLP measures leads to the 'improvement' of systems by increasing their bias, such as the practice of improving tagging and parsing scores by using the most common value (e.g. water is always a Noun) rather than attempting to discover the correct one. The measures Cohen Kappa and Powers Informedness are discussed as unbiased alternatives to Recall and related to the psychologically significant measure DeltaP. In this paper we will analyze both biased and unbiased measures theoretically, characterizing the precise relationship between all these measures as well as evaluating the evaluation measures themselves empirically using a Monte Carlo simulation.
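A short sketch computing the discussed measures from a 2x2 contingency table. Informedness here is Recall + Inverse Recall - 1 (Youden's J); the example at the end is a maximally biased classifier on imbalanced data whose Accuracy is high while Informedness and Kappa are exactly zero, the failure mode the abstract warns about.

```python
# Sketch: biased vs chance-corrected measures from a 2x2 contingency table.
def measures(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    recall = tp / (tp + fn)
    inv_recall = tn / (tn + fp)                      # recall of the negative class
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / n
    informedness = recall + inv_recall - 1           # Youden's J, chance-corrected
    p_o = accuracy                                   # observed agreement
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_o - p_e) / (1 - p_e)                  # Cohen's Kappa
    return dict(recall=recall, precision=precision, accuracy=accuracy,
                informedness=informedness, kappa=kappa)

# Always predicting the majority class: Accuracy 0.9, Informedness and Kappa 0.
print(measures(tp=90, fp=10, fn=0, tn=0))
```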
|
1405.7895
|
Kemiha Mina melle
|
Mina Kemiha
|
Empirical mode decomposition and NormalShrink thresholding for speech
denoising
|
8 pages, 6 figures
|
IJIT Journal 3( 2), 2014, pp 27-35
| null | null |
cs.IT cs.SY math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper a signal denoising scheme based on Empirical Mode Decomposition
(EMD) is presented. The denoising method is a fully data-driven approach. The
noisy signal is decomposed adaptively into intrinsic oscillatory components,
called Intrinsic Mode Functions (IMFs), using a decomposition algorithm called
the sifting process. The basic principle of the method is to decompose a
speech signal into segments, categorise each frame as either signal-dominant
or noise-dominant, and then reconstruct the signal from the signal-dominant
frames of the IMFs, previously filtered or thresholded. It is shown, on the
basis of intensive simulations, that EMD improves the signal-to-noise ratio
and addresses the problem of signal degradation. The denoising method is
applied to real signals with different noise levels and the results are
compared to the Wiener filter and the universal threshold of DONOHO and
JOHNSTONE [11] with soft and hard thresholding. The effect of the noise level
on the performance of the proposed denoising method is analysed. The study is
limited to signals corrupted by additive white Gaussian random noise.
|
[
{
"created": "Thu, 8 May 2014 13:12:31 GMT",
"version": "v1"
}
] |
2014-06-02
|
[
[
"Kemiha",
"Mina",
""
]
] |
In this paper a signal denoising scheme based on Empirical Mode Decomposition (EMD) is presented. The denoising method is a fully data-driven approach. The noisy signal is decomposed adaptively into intrinsic oscillatory components, called Intrinsic Mode Functions (IMFs), using a decomposition algorithm called the sifting process. The basic principle of the method is to decompose a speech signal into segments, categorise each frame as either signal-dominant or noise-dominant, and then reconstruct the signal from the signal-dominant frames of the IMFs, previously filtered or thresholded. It is shown, on the basis of intensive simulations, that EMD improves the signal-to-noise ratio and addresses the problem of signal degradation. The denoising method is applied to real signals with different noise levels and the results are compared to the Wiener filter and the universal threshold of DONOHO and JOHNSTONE [11] with soft and hard thresholding. The effect of the noise level on the performance of the proposed denoising method is analysed. The study is limited to signals corrupted by additive white Gaussian random noise.
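A hedged sketch of the pipeline using the third-party PyEMD package (an assumption; any EMD implementation would do): decompose, soft-threshold the noise-dominated low-order IMFs with a Donoho-Johnstone-style universal threshold, and reconstruct. The frame-wise signal-/noise-dominant classification is omitted for brevity, and the noise-scale estimate is a simplification.

```python
# Hedged sketch: EMD-based denoising with soft thresholding of low-order IMFs.
import numpy as np
from PyEMD import EMD                     # pip install EMD-signal (assumption)

def emd_denoise(x, n_noisy=2):
    imfs = EMD().emd(x)                   # sifting process -> IMFs
    out = np.zeros_like(x)
    for k, imf in enumerate(imfs):
        if k < n_noisy:                   # first IMFs carry most of the noise
            # universal threshold; std(imf) is a crude noise-scale proxy
            thr = np.std(imf) * np.sqrt(2 * np.log(len(x)))
            imf = np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)  # soft
        out += imf
    return out

t = np.linspace(0, 1, 2000)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
clean = emd_denoise(noisy)
```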
|
2002.03182
|
Zafaryab Rasool
|
Zafaryab Rasool, Rui Zhou, Lu Chen, Chengfei Liu, Jiajie Xu
|
Index-based Solutions for Efficient Density Peak Clustering
| null | null | null | null |
cs.DB cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Density Peak Clustering (DPC), a popular density-based clustering approach,
has received considerable attention from the research community primarily due
to its simplicity and small number of required parameters. However, the
resultant
clusters obtained using DPC are influenced by the sensitive parameter $d_c$,
which depends on data distribution and requirements of different users.
Besides, the original DPC algorithm requires visiting a large number of
objects, making it slow. To this end, this paper investigates index-based
solutions for DPC. Specifically, we propose two list-based index methods viz.
(i) a simple List Index, and (ii) an advanced Cumulative Histogram Index.
Efficient query algorithms are proposed for these indices which significantly
avoid irrelevant comparisons at the cost of space. For memory-constrained
systems, we further introduce an approximate solution to the above indices
which allows a substantial reduction in the space cost, provided that slight
inaccuracies are admissible. Furthermore, owing to considerably lower memory
requirements of existing tree-based index structures, we also present effective
pruning techniques and efficient query algorithms to support DPC using the
popular Quadtree Index and R-tree Index. Finally, we practically evaluate all
the above indices and present the findings and results, obtained from a set of
extensive experiments on six synthetic and real datasets. The experimental
insights obtained can help guide the selection of a suitable index.
|
[
{
"created": "Sat, 8 Feb 2020 15:22:37 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Jul 2020 02:08:44 GMT",
"version": "v2"
}
] |
2020-07-24
|
[
[
"Rasool",
"Zafaryab",
""
],
[
"Zhou",
"Rui",
""
],
[
"Chen",
"Lu",
""
],
[
"Liu",
"Chengfei",
""
],
[
"Xu",
"Jiajie",
""
]
] |
Density Peak Clustering (DPC), a popular density-based clustering approach, has received considerable attention from the research community primarily due to its simplicity and small number of required parameters. However, the resultant clusters obtained using DPC are influenced by the sensitive parameter $d_c$, which depends on data distribution and requirements of different users. Besides, the original DPC algorithm requires visiting a large number of objects, making it slow. To this end, this paper investigates index-based solutions for DPC. Specifically, we propose two list-based index methods viz. (i) a simple List Index, and (ii) an advanced Cumulative Histogram Index. Efficient query algorithms are proposed for these indices which significantly avoid irrelevant comparisons at the cost of space. For memory-constrained systems, we further introduce an approximate solution to the above indices which allows a substantial reduction in the space cost, provided that slight inaccuracies are admissible. Furthermore, owing to considerably lower memory requirements of existing tree-based index structures, we also present effective pruning techniques and efficient query algorithms to support DPC using the popular Quadtree Index and R-tree Index. Finally, we practically evaluate all the above indices and present the findings and results, obtained from a set of extensive experiments on six synthetic and real datasets. The experimental insights obtained can help guide the selection of a suitable index.
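For context, a brute-force sketch of the two quantities every DPC variant computes: local density rho under the cutoff kernel with the sensitive parameter d_c, and the distance delta to the nearest higher-density point. This O(n^2) baseline is exactly the neighbourhood computation the proposed indices accelerate.

```python
# Sketch: the rho/delta computation at the heart of Density Peak Clustering.
import numpy as np

def dpc_rho_delta(X, d_c):
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    rho = (D < d_c).sum(axis=1) - 1              # neighbours within d_c (no self)
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]       # points of higher density
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta                            # cluster centers: high rho AND delta

X = np.random.rand(200, 2)
rho, delta = dpc_rho_delta(X, d_c=0.1)
```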
|
1703.03315
|
Stephane Devismes
|
St\'ephane Devismes, David Ilcinkas (LaBRI), Colette Johnen (LaBRI)
|
Self-Stabilizing Disconnected Components Detection and Rooted
Shortest-Path Tree Maintenance in Polynomial Steps
| null |
Discrete Mathematics and Theoretical Computer Science, DMTCS,
2017, ISS, pp.14 - 14
| null | null |
cs.DC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We deal with the problem of maintaining a shortest-path tree rooted at some
process r in a network that may be disconnected after topological changes. The
goal is then to maintain a shortest-path tree rooted at r in its connected
component, V\_r, and make all processes of other components detect that r is
not part of their connected component. We propose, in the composite atomicity
model, a silent self-stabilizing algorithm for this problem working in
semi-anonymous networks, where edges have strictly positive weights. This
algorithm does not require any a priori knowledge about global parameters of
the network. We prove its correctness assuming the distributed unfair daemon,
the most general daemon. Its stabilization time in rounds is at most 3nmax+D,
where nmax is the maximum number of non-root processes in a connected component
and D is the hop-diameter of V\_r. Furthermore, if we additionally assume that
edge weights are positive integers, then it stabilizes in a polynomial number
of steps: namely, we exhibit a bound in O(W_max * nmax^3 * n), where W_max is
the maximum weight of an edge and n is the number of processes.
|
[
{
"created": "Thu, 9 Mar 2017 16:04:37 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Aug 2017 13:17:15 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Nov 2017 13:31:28 GMT",
"version": "v3"
}
] |
2017-12-01
|
[
[
"Devismes",
"Stéphane",
"",
"LaBRI"
],
[
"Ilcinkas",
"David",
"",
"LaBRI"
],
[
"Johnen",
"Colette",
"",
"LaBRI"
]
] |
We deal with the problem of maintaining a shortest-path tree rooted at some process r in a network that may be disconnected after topological changes. The goal is then to maintain a shortest-path tree rooted at r in its connected component, V\_r, and make all processes of other components detect that r is not part of their connected component. We propose, in the composite atomicity model, a silent self-stabilizing algorithm for this problem working in semi-anonymous networks, where edges have strictly positive weights. This algorithm does not require any a priori knowledge about global parameters of the network. We prove its correctness assuming the distributed unfair daemon, the most general daemon. Its stabilization time in rounds is at most 3nmax+D, where nmax is the maximum number of non-root processes in a connected component and D is the hop-diameter of V\_r. Furthermore, if we additionally assume that edge weights are positive integers, then it stabilizes in a polynomial number of steps: namely, we exhibit a bound in O(W_max * nmax^3 * n), where W_max is the maximum weight of an edge and n is the number of processes.
|
0811.0419
|
Xiaochuan Zhao
|
Xiaochuan Zhao (1), Tao Peng (1), Ming Yang (1) and Wenbo Wang (1)
((1) Beijing University of Posts and Telecommunications, Beijing, China)
|
Doppler Spread Estimation by Subspace Tracking for OFDM Systems
|
5 pages, 3 figures, To appear in IEEE GLOBECOM'08
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel maximum Doppler spread estimation algorithm for
OFDM systems with the comb-type pilot pattern. By tracking the drifting delay
subspace of the multipath channel, the time correlation function is measured
with high accuracy, which accordingly improves the estimation accuracy of the
maximum Doppler spread considerably.
|
[
{
"created": "Tue, 4 Nov 2008 03:30:20 GMT",
"version": "v1"
}
] |
2008-11-05
|
[
[
"Zhao",
"Xiaochuan",
"",
"Beijing University of Posts and Telecommunications, Beijing, China"
],
[
"Peng",
"Tao",
"",
"Beijing University of Posts and Telecommunications, Beijing, China"
],
[
"Yang",
"Ming",
"",
"Beijing University of Posts and Telecommunications, Beijing, China"
],
[
"Wang",
"Wenbo",
"",
"Beijing University of Posts and Telecommunications, Beijing, China"
]
] |
This paper proposes a novel maximum Doppler spread estimation algorithm for OFDM systems with the comb-type pilot pattern. By tracking the drifting delay subspace of the multipath channel, the time correlation function is measured with high accuracy, which accordingly improves the estimation accuracy of the maximum Doppler spread considerably.
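A hedged sketch of the final step such estimators rely on: under the Jakes model the time correlation is R(tau) = J0(2*pi*fd*tau), so fd can be read off the first zero crossing of the measured correlation (J0's first zero is at about 2.40483). Obtaining an accurate R(tau) via delay-subspace tracking is the paper's contribution and is not shown; the synthetic channel below is illustrative.

```python
# Hedged sketch: Doppler spread from the zero crossing of the time correlation.
import numpy as np

def doppler_from_correlation(h, dt):
    """h: complex channel gain samples, dt: sampling interval in seconds."""
    h = h - h.mean()
    r = np.correlate(h, h, mode='full')[h.size - 1:]   # one-sided autocorrelation
    r = (r / r[0]).real                                # normalised correlation
    k0 = np.argmax(r < 0)                              # first zero-crossing lag
    return 2.40483 / (2 * np.pi * k0 * dt)             # J0's first zero at 2.40483

# synthetic flat fading: sum-of-sinusoids Jakes-like process with fd = 100 Hz
fd, dt, n = 100.0, 1e-4, 4000
t = np.arange(n) * dt
ang = 2 * np.pi * np.random.rand(32)
h = np.exp(1j * 2 * np.pi * fd * np.cos(ang)[:, None] * t).mean(axis=0)
print(doppler_from_correlation(h, dt))                 # roughly 100 Hz
```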
|
1003.5510
|
Green Daily
|
Claude Castelluccia and Emiliano De Cristofaro and Aurelien Francillon
and Mohamed-Ali Kaafar
|
EphPub: Toward Robust Ephemeral Publishing
|
Proceedings of IEEE ICNP 2011
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing amount of personal and sensitive information disseminated over
the Internet prompts commensurately growing privacy concerns. Digital data
often lingers indefinitely and users lose control of it. This motivates the
desire to restrict content availability to an expiration time set by the data
owner. This paper presents and formalizes the notion of Ephemeral Publishing
(EphPub), to prevent the access to expired content. We propose an efficient and
robust protocol that builds on the Domain Name System (DNS) and its caching
mechanism. With EphPub, sensitive content is published encrypted and the key
material is distributed, in a steganographic manner, to randomly selected and
independent resolvers. The availability of content is then limited by the
evanescence of DNS cache entries. The EphPub protocol is transparent to
existing applications, and does not rely on trusted hardware, centralized
servers, or user proactive actions. We analyze its robustness and show that it
incurs a negligible overhead on the DNS infrastructure. We also perform a
large-scale study of the caching behavior of 900K open DNS resolvers. Finally,
we propose Firefox and Thunderbird extensions that provide ephemeral publishing
capabilities, as well as a command-line tool to create ephemeral files.
|
[
{
"created": "Mon, 29 Mar 2010 12:00:08 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Oct 2010 12:21:14 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Oct 2011 16:44:24 GMT",
"version": "v3"
}
] |
2015-03-13
|
[
[
"Castelluccia",
"Claude",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Francillon",
"Aurelien",
""
],
[
"Kaafar",
"Mohamed-Ali",
""
]
] |
The increasing amount of personal and sensitive information disseminated over the Internet prompts commensurately growing privacy concerns. Digital data often lingers indefinitely and users lose control of it. This motivates the desire to restrict content availability to an expiration time set by the data owner. This paper presents and formalizes the notion of Ephemeral Publishing (EphPub), to prevent the access to expired content. We propose an efficient and robust protocol that builds on the Domain Name System (DNS) and its caching mechanism. With EphPub, sensitive content is published encrypted and the key material is distributed, in a steganographic manner, to randomly selected and independent resolvers. The availability of content is then limited by the evanescence of DNS cache entries. The EphPub protocol is transparent to existing applications, and does not rely on trusted hardware, centralized servers, or user proactive actions. We analyze its robustness and show that it incurs a negligible overhead on the DNS infrastructure. We also perform a large-scale study of the caching behavior of 900K open DNS resolvers. Finally, we propose Firefox and Thunderbird extensions that provide ephemeral publishing capabilities, as well as a command-line tool to create ephemeral files.
|
2012.13523
|
Xiaoming Chen
|
Xiaodan Shao, Xiaoming Chen, Yiyang Qiang, Caijun Zhong, Zhaoyang
Zhang
|
Feature-Aided Adaptive-Tuning Deep Learning for Massive Device Detection
|
To appear in IEEE Journal on Selected Areas in Communications
"Machine Learning in Communications and Networks"
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
With the increasing development of Internet of Things (IoT), the upcoming
sixth-generation (6G) wireless network is required to support grant-free random
access of a massive number of sporadic traffic devices. In particular, at the
beginning of each time slot, the base station (BS) performs joint activity
detection and channel estimation (JADCE) based on the received pilot sequences
sent from active devices. Due to the deployment of a large-scale antenna array
and the existence of a massive number of IoT devices, conventional JADCE
approaches usually have high computational complexity and need long pilot
sequences. To solve these challenges, this paper proposes a novel deep learning
framework for JADCE in 6G wireless networks, which contains a dimension
reduction module, a deep learning network module, an active device detection
module, and a channel estimation module. Then, prior-feature learning followed
by an adaptive-tuning strategy is proposed, where an inner network composed of
the Expectation-maximization (EM) and back-propagation is introduced to jointly
tune the precision and learn the distribution parameters of the device state
matrix. Finally, by designing the inner layer-by-layer and outer layer-by-layer
training method, a feature-aided adaptive-tuning deep learning network is
built. Both theoretical analysis and simulation results confirm that the
proposed deep learning framework has low computational complexity and needs
short pilot sequences in practical scenarios.
|
[
{
"created": "Fri, 25 Dec 2020 06:15:12 GMT",
"version": "v1"
}
] |
2020-12-29
|
[
[
"Shao",
"Xiaodan",
""
],
[
"Chen",
"Xiaoming",
""
],
[
"Qiang",
"Yiyang",
""
],
[
"Zhong",
"Caijun",
""
],
[
"Zhang",
"Zhaoyang",
""
]
] |
With the increasing development of Internet of Things (IoT), the upcoming sixth-generation (6G) wireless network is required to support grant-free random access of a massive number of sporadic traffic devices. In particular, at the beginning of each time slot, the base station (BS) performs joint activity detection and channel estimation (JADCE) based on the received pilot sequences sent from active devices. Due to the deployment of a large-scale antenna array and the existence of a massive number of IoT devices, conventional JADCE approaches usually have high computational complexity and need long pilot sequences. To solve these challenges, this paper proposes a novel deep learning framework for JADCE in 6G wireless networks, which contains a dimension reduction module, a deep learning network module, an active device detection module, and a channel estimation module. Then, prior-feature learning followed by an adaptive-tuning strategy is proposed, where an inner network composed of the Expectation-maximization (EM) and back-propagation is introduced to jointly tune the precision and learn the distribution parameters of the device state matrix. Finally, by designing the inner layer-by-layer and outer layer-by-layer training method, a feature-aided adaptive-tuning deep learning network is built. Both theoretical analysis and simulation results confirm that the proposed deep learning framework has low computational complexity and needs short pilot sequences in practical scenarios.
|
1906.04914
|
Suraj Tripathi
|
Abhay Kumar, Nishant Jain, Suraj Tripathi, Chirag Singh
|
From Fully Supervised to Zero Shot Settings for Twitter Hashtag
Recommendation
|
Accepted in CICLing 2019
| null | null | null |
cs.IR cs.CL cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a comprehensive end-to-end pipeline for a Twitter hashtag
recommendation system, including data collection, a supervised training setting
and a zero shot training setting. In the supervised training setting, we have
proposed and compared the performance of various deep learning architectures,
namely Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and
Transformer Network. However, it is not feasible to collect data for all
possible hashtag labels and train a classifier model on them. To overcome this
limitation, we propose a Zero Shot Learning (ZSL) paradigm for predicting
unseen hashtag labels by learning the relationship between the semantic space
of tweets and the embedding space of hashtag labels. We evaluated various
state-of-the-art ZSL methods like Convex combination of Semantic Embedding
(ConSE), Embarrassingly Simple Zero-Shot Learning (ESZSL) and Deep Embedding
Model for Zero-Shot Learning (DEM-ZSL) for the hashtag recommendation task. We
demonstrate the effectiveness and scalability of ZSL methods for the
recommendation of unseen hashtags. To the best of our knowledge, this is the
first quantitative evaluation of ZSL methods to date for unseen hashtag
recommendation from tweet text.
|
[
{
"created": "Tue, 11 Jun 2019 17:38:28 GMT",
"version": "v1"
}
] |
2019-06-13
|
[
[
"Kumar",
"Abhay",
""
],
[
"Jain",
"Nishant",
""
],
[
"Tripathi",
"Suraj",
""
],
[
"Singh",
"Chirag",
""
]
] |
We propose a comprehensive end-to-end pipeline for a Twitter hashtag recommendation system, including data collection, a supervised training setting and a zero shot training setting. In the supervised training setting, we have proposed and compared the performance of various deep learning architectures, namely Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and Transformer Network. However, it is not feasible to collect data for all possible hashtag labels and train a classifier model on them. To overcome this limitation, we propose a Zero Shot Learning (ZSL) paradigm for predicting unseen hashtag labels by learning the relationship between the semantic space of tweets and the embedding space of hashtag labels. We evaluated various state-of-the-art ZSL methods like Convex combination of Semantic Embedding (ConSE), Embarrassingly Simple Zero-Shot Learning (ESZSL) and Deep Embedding Model for Zero-Shot Learning (DEM-ZSL) for the hashtag recommendation task. We demonstrate the effectiveness and scalability of ZSL methods for the recommendation of unseen hashtags. To the best of our knowledge, this is the first quantitative evaluation of ZSL methods to date for unseen hashtag recommendation from tweet text.
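A minimal sketch of ConSE, the simplest of the evaluated ZSL methods: the tweet is projected into the semantic space as a probability-weighted convex combination of seen-hashtag embeddings, then matched to the nearest unseen-hashtag embedding. Dimensions and data below are illustrative.

```python
# Hedged sketch: ConSE-style zero-shot hashtag prediction.
import numpy as np

def conse_predict(probs, seen_emb, unseen_emb, top_k=5):
    """probs: (n_seen,) classifier output; *_emb: label embedding matrices."""
    top = np.argsort(probs)[-top_k:]
    w = probs[top] / probs[top].sum()
    z = w @ seen_emb[top]                           # convex combination in semantic space
    sims = unseen_emb @ z / (np.linalg.norm(unseen_emb, axis=1)
                             * np.linalg.norm(z))   # cosine similarity
    return np.argmax(sims)                          # nearest unseen hashtag

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(50))                  # over 50 seen hashtags
print(conse_predict(probs, rng.normal(size=(50, 100)),
                    rng.normal(size=(20, 100))))
```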
|
1207.1550
|
Yuanxin Wu
|
Yuanxin Wu and Xianfei Pan
|
Velocity/Position Integration Formula (I): Application to In-flight
Coarse Alignment
|
IEEE Trans. on Aerospace and Electronic Systems, in press
|
IEEE Transactions on Aerospace and Electronic Systems, vol. 49,
no. 2, pp. 1006-1023, 2013
|
10.1109/TAES.2013.6494395
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The in-flight alignment is a critical stage for airborne INS/GPS
applications. The alignment task is usually carried out by the Kalman filtering
technique that necessitates a good initial attitude to obtain satisfying
performance. Due to the airborne dynamics, in-flight alignment is much more
difficult than alignment on the ground. This paper proposes an
optimization-based coarse alignment approach using GPS position/velocity as
input, founded on the newly-derived velocity/position integration formulae.
Simulation and flight test results show that, with the GPS lever arm well
handled, it is potentially able to yield the initial heading up to one degree
accuracy in ten seconds. It can serve as a nice coarse in-flight alignment
without any prior attitude information for the subsequent fine Kalman
alignment. The approach can also be applied to other applications that require
aligning the INS on the run.
|
[
{
"created": "Fri, 6 Jul 2012 08:04:25 GMT",
"version": "v1"
}
] |
2013-04-10
|
[
[
"Wu",
"Yuanxin",
""
],
[
"Pan",
"Xianfei",
""
]
] |
The in-flight alignment is a critical stage for airborne INS/GPS applications. The alignment task is usually carried out by the Kalman filtering technique that necessitates a good initial attitude to obtain satisfying performance. Due to the airborne dynamics, in-flight alignment is much more difficult than alignment on the ground. This paper proposes an optimization-based coarse alignment approach using GPS position/velocity as input, founded on the newly-derived velocity/position integration formulae. Simulation and flight test results show that, with the GPS lever arm well handled, it is potentially able to yield the initial heading up to one degree accuracy in ten seconds. It can serve as a nice coarse in-flight alignment without any prior attitude information for the subsequent fine Kalman alignment. The approach can also be applied to other applications that require aligning the INS on the run.
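A hedged sketch of the closed-form core of optimization-based coarse alignment: once the velocity/position integration formulae turn GPS measurements into matched vector observations in the body and reference frames, the initial attitude is the rotation that best maps one set onto the other (Wahba's problem), solvable by SVD. The construction of the vector pairs themselves is the paper's contribution and is not shown.

```python
# Hedged sketch: attitude from matched vector observations (Wahba's problem).
import numpy as np

def attitude_from_vectors(B, R):
    """B, R: (k, 3) matched observations; returns rotation C with r = C b."""
    H = R.T @ B                                   # attitude profile matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))            # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# sanity check against a known rotation
th = 0.3
C_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
B = np.random.randn(6, 3)
print(np.allclose(attitude_from_vectors(B, B @ C_true.T), C_true))
```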
|
1707.03217
|
Gregor Wiedemann
|
Gregor Wiedemann, Andreas Niekler
|
Document Retrieval for Large Scale Content Analysis using Contextualized
Dictionaries
|
https://hal.archives-ouvertes.fr/hal-01005879; Proceedings of
Terminology and Knowledge Engineering 2014 (TKE'14), Berlin
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a procedure to retrieve subsets of relevant documents
from large text collections for Content Analysis, e.g. in social sciences.
Document retrieval for this purpose needs to take account of the fact that
analysts often cannot describe their research objective with a small set of key
terms, especially when dealing with theoretical or rather abstract research
interests. Instead, it is much easier to define a set of paradigmatic documents
which reflect topics of interest as well as targeted manner of speech. Thus, in
contrast to classic information retrieval tasks we employ manually compiled
collections of reference documents to compose large queries of several hundred
key terms, called dictionaries. We extract dictionaries via Topic Models and
also use co-occurrence data from reference collections. Evaluations show that
the procedure improves retrieval results for this purpose compared to
alternative methods of key term extraction as well as neglecting co-occurrence
data.
|
[
{
"created": "Tue, 11 Jul 2017 11:00:44 GMT",
"version": "v1"
}
] |
2017-07-12
|
[
[
"Wiedemann",
"Gregor",
""
],
[
"Niekler",
"Andreas",
""
]
] |
This paper presents a procedure to retrieve subsets of relevant documents from large text collections for Content Analysis, e.g. in social sciences. Document retrieval for this purpose needs to take account of the fact that analysts often cannot describe their research objective with a small set of key terms, especially when dealing with theoretical or rather abstract research interests. Instead, it is much easier to define a set of paradigmatic documents which reflect topics of interest as well as targeted manner of speech. Thus, in contrast to classic information retrieval tasks we employ manually compiled collections of reference documents to compose large queries of several hundred key terms, called dictionaries. We extract dictionaries via Topic Models and also use co-occurrence data from reference collections. Evaluations show that the procedure improves retrieval results for this purpose compared to alternative methods of key term extraction as well as neglecting co-occurrence data.
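An illustrative sketch of scoring documents against such a contextualized dictionary: a several-hundred-term weighted query derived from a reference collection (the weighting and normalisation below are assumptions, not the authors' exact procedure):

    from collections import Counter

    def score_documents(docs, dictionary):
        """Rank documents by a weighted key-term dictionary.

        docs       : list of token lists
        dictionary : dict mapping key term -> weight (e.g. topic-model or
                     co-occurrence derived), typically several hundred entries
        """
        scores = []
        for tokens in docs:
            tf = Counter(tokens)
            s = sum(tf[t] * w for t, w in dictionary.items())
            scores.append(s / max(len(tokens), 1))   # length-normalised score
        return sorted(range(len(docs)), key=lambda i: -scores[i])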
|
2102.05749
|
Ond\v{r}ej C\'ifka
|
Ond\v{r}ej C\'ifka, Alexey Ozerov, Umut \c{S}im\c{s}ekli, Ga\"el
Richard
|
Self-Supervised VQ-VAE for One-Shot Music Style Transfer
|
ICASSP 2021. Website: https://adasp.telecom-paris.fr/s/ss-vq-vae
|
ICASSP 2021 - 2021 IEEE International Conference on Acoustics,
Speech and Signal Processing (2021) 96-100
|
10.1109/ICASSP39728.2021.9414235
| null |
cs.SD cs.LG eess.AS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural style transfer, which allows the artistic style of one image to be
applied to another, has become one of the most widely showcased computer vision
applications shortly after its introduction. In contrast, related tasks in the
music audio domain remained, until recently, largely untackled. While several
style conversion methods tailored to musical signals have been proposed, most
lack the 'one-shot' capability of classical image style transfer algorithms. On
the other hand, the results of existing one-shot audio style transfer methods
on musical inputs are not as compelling. In this work, we are specifically
interested in the problem of one-shot timbre transfer. We present a novel
method for this task, based on an extension of the vector-quantized variational
autoencoder (VQ-VAE), along with a simple self-supervised learning strategy
designed to obtain disentangled representations of timbre and pitch. We
evaluate the method using a set of objective metrics and show that it is able
to outperform selected baselines.
|
[
{
"created": "Wed, 10 Feb 2021 21:42:49 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jun 2021 15:15:22 GMT",
"version": "v2"
}
] |
2021-06-11
|
[
[
"Cífka",
"Ondřej",
""
],
[
"Ozerov",
"Alexey",
""
],
[
"Şimşekli",
"Umut",
""
],
[
"Richard",
"Gaël",
""
]
] |
Neural style transfer, which allows the artistic style of one image to be applied to another, has become one of the most widely showcased computer vision applications shortly after its introduction. In contrast, related tasks in the music audio domain remained, until recently, largely untackled. While several style conversion methods tailored to musical signals have been proposed, most lack the 'one-shot' capability of classical image style transfer algorithms. On the other hand, the results of existing one-shot audio style transfer methods on musical inputs are not as compelling. In this work, we are specifically interested in the problem of one-shot timbre transfer. We present a novel method for this task, based on an extension of the vector-quantized variational autoencoder (VQ-VAE), along with a simple self-supervised learning strategy designed to obtain disentangled representations of timbre and pitch. We evaluate the method using a set of objective metrics and show that it is able to outperform selected baselines.
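The VQ-VAE bottleneck referred to above assigns each continuous encoder output to its nearest learned codebook entry. A minimal numpy sketch of that quantisation step, not the paper's model (shapes and names are illustrative):

    import numpy as np

    def vector_quantize(z, codebook):
        """Map encoder outputs to their nearest codebook vectors.

        z        : (n, d) continuous encoder outputs (e.g. pitch content)
        codebook : (k, d) learned discrete codes
        Returns the quantised vectors and their code indices.
        """
        d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, k)
        idx = d2.argmin(axis=1)
        return codebook[idx], idx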
|
2402.05764
|
Basile Simon
|
Matt Shearer, Basile Simon, Cl\'ement Geiger
|
Datastringer: easy dataset monitoring for journalists
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We created a software tool enabling journalists to define a set of criteria
they would like to see applied regularly to a constantly updated dataset,
sending them an alert when these criteria are met, thus signalling that there
may be a story to write. The main challenges were to keep the product scalable
and powerful, while making sure that it could be used by journalists who do
not possess all the technical knowledge to exploit it fully. To do so, we
chose Javascript as our main language and designed the code in such a way
that it allows re-usability and further improvements. This project is a proof
of concept being tested in a real-life environment, and will be developed
towards greater accessibility.
|
[
{
"created": "Thu, 8 Feb 2024 15:49:58 GMT",
"version": "v1"
}
] |
2024-02-09
|
[
[
"Shearer",
"Matt",
""
],
[
"Simon",
"Basile",
""
],
[
"Geiger",
"Clément",
""
]
] |
We created a software tool enabling journalists to define a set of criteria they would like to see applied regularly to a constantly updated dataset, sending them an alert when these criteria are met, thus signalling that there may be a story to write. The main challenges were to keep the product scalable and powerful, while making sure that it could be used by journalists who do not possess all the technical knowledge to exploit it fully. To do so, we chose Javascript as our main language and designed the code in such a way that it allows re-usability and further improvements. This project is a proof of concept being tested in a real-life environment, and will be developed towards greater accessibility.
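The project itself is written in Javascript; the following language-agnostic sketch in Python only illustrates the core monitoring loop described above (the criterion, field names and threshold are hypothetical):

    def check_dataset(rows, criteria, alert):
        """Apply journalist-defined criteria to a refreshed dataset.

        rows     : latest records from the monitored dataset
        criteria : list of (name, predicate) pairs defined by the journalist
        alert    : callback invoked when a criterion is met
        """
        for row in rows:
            for name, predicate in criteria:
                if predicate(row):
                    alert(f"criterion '{name}' met", row)

    # Example: flag any borough whose crime count jumps above a threshold.
    criteria = [("crime spike", lambda r: r.get("crimes", 0) > 100)]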
|
1504.02264
|
Wim Vanderbauwhede
|
Wim Vanderbauwhede
|
Model Coupling between the Weather Research and Forecasting Model and
the DPRI Large Eddy Simulator for Urban Flows on GPU-accelerated Multicore
Systems
|
This work was conducted during a research visit at the Disaster
Prevention Research Institute of Kyoto University, supported by an EPSRC
Overseas Travel Grant, EP/L026201/1
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this report we present a novel approach to model coupling for
shared-memory multicore systems hosting OpenCL-compliant accelerators, which we
call The Glasgow Model Coupling Framework (GMCF). We discuss the implementation
of a prototype of GMCF and its application to coupling the Weather Research and
Forecasting Model and an OpenCL-accelerated version of the Large Eddy Simulator
for Urban Flows (LES) developed at DPRI.
The first stage of this work concerned the OpenCL port of the LES. The
methodology used for the OpenCL port is a combination of automated analysis and
code generation and rule-based manual parallelization. For the evaluation, the
non-OpenCL LES code was compiled using gfortran, ifort and pgfortran, in each
case with auto-parallelization and auto-vectorization. The OpenCL-accelerated
version of the LES achieves a 7 times speed-up on a NVIDIA GeForce GTX 480
GPGPU, compared to the fastest possible compilation of the original code
running on a 12-core Intel Xeon E5-2640.
In the second stage of this work, we built the Glasgow Model Coupling
Framework and successfully used it to couple an OpenMP-parallelized WRF
instance with an OpenCL LES instance which runs the LES code on the GPGPU. The
system requires only very minimal changes to the original code. The report
discusses the rationale, aims, approach and implementation details of this
work.
|
[
{
"created": "Thu, 9 Apr 2015 11:22:46 GMT",
"version": "v1"
}
] |
2015-04-10
|
[
[
"Vanderbauwhede",
"Wim",
""
]
] |
In this report we present a novel approach to model coupling for shared-memory multicore systems hosting OpenCL-compliant accelerators, which we call The Glasgow Model Coupling Framework (GMCF). We discuss the implementation of a prototype of GMCF and its application to coupling the Weather Research and Forecasting Model and an OpenCL-accelerated version of the Large Eddy Simulator for Urban Flows (LES) developed at DPRI. The first stage of this work concerned the OpenCL port of the LES. The methodology used for the OpenCL port is a combination of automated analysis and code generation and rule-based manual parallelization. For the evaluation, the non-OpenCL LES code was compiled using gfortran, ifort and pgfortran, in each case with auto-parallelization and auto-vectorization. The OpenCL-accelerated version of the LES achieves a 7 times speed-up on a NVIDIA GeForce GTX 480 GPGPU, compared to the fastest possible compilation of the original code running on a 12-core Intel Xeon E5-2640. In the second stage of this work, we built the Glasgow Model Coupling Framework and successfully used it to couple an OpenMP-parallelized WRF instance with an OpenCL LES instance which runs the LES code on the GPGPU. The system requires only very minimal changes to the original code. The report discusses the rationale, aims, approach and implementation details of this work.
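To make the coupling idea concrete, here is a toy sketch of two time-stepping models exchanging boundary fields through in-memory queues once per step, in the spirit of a WRF/LES coupling loop; this is not GMCF's actual API, and the model interface is an assumption:

    import queue, threading

    def run_coupled(model_a, model_b, steps):
        """Couple two time-stepping models through in-memory queues.

        model_a, model_b : callables (state, incoming) -> (state, outgoing)
        Each model runs in its own thread and exchanges a field once per
        step; the exchange is lagged by one step.
        """
        a_to_b, b_to_a = queue.Queue(), queue.Queue()

        def drive(model, inbox, outbox, state):
            for _ in range(steps):
                state, out = model(state, inbox.get())
                outbox.put(out)

        b_to_a.put(None); a_to_b.put(None)    # bootstrap the first exchange
        ta = threading.Thread(target=drive, args=(model_a, b_to_a, a_to_b, {}))
        tb = threading.Thread(target=drive, args=(model_b, a_to_b, b_to_a, {}))
        ta.start(); tb.start(); ta.join(); tb.join()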
|
1408.6290
|
Ichiroh Kanaya Dr.
|
Ichiroh Kanaya, Mayuko Kanazawa, Masataka Imura
|
Function + Action = Interaction
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
This article presents the mathematical background of general interactive
systems. The first principle of designing a large system is to _divide and
conquer_, which implies that we could possibly reduce human error if we divided
a large system into smaller subsystems. Interactive systems are, however, often
composed of many subsystems that are _organically_ connected to one another and
thus difficult to divide. In other words, we cannot apply a framework of set
theory to the programming of interactive systems. We can overcome this
difficulty by applying a framework of category theory (Kleisli category) to the
programming, but this requires highly abstract mathematics, which is not very
popular. In this article we introduce the fundamental idea of category theory
using only lambda calculus, and then demonstrate how it can be used in the
practical design of an interactive system. Finally, we mention how this
discussion relates to category theory.
|
[
{
"created": "Wed, 27 Aug 2014 01:23:44 GMT",
"version": "v1"
}
] |
2014-08-28
|
[
[
"Kanaya",
"Ichiroh",
""
],
[
"Kanazawa",
"Mayuko",
""
],
[
"Imura",
"Masataka",
""
]
] |
This article presents the mathematical background of general interactive systems. The first principle of designing a large system is to _divide and conquer_, which implies that we could possibly reduce human error if we divided a large system into smaller subsystems. Interactive systems are, however, often composed of many subsystems that are _organically_ connected to one another and thus difficult to divide. In other words, we cannot apply a framework of set theory to the programming of interactive systems. We can overcome this difficulty by applying a framework of category theory (Kleisli category) to the programming, but this requires highly abstract mathematics, which is not very popular. In this article we introduce the fundamental idea of category theory using only lambda calculus, and then demonstrate how it can be used in the practical design of an interactive system. Finally, we mention how this discussion relates to category theory.
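A small illustration of the Kleisli-style composition at stake: composing state-passing functions so that each interactive step threads its effect through to the next. The article develops this in lambda calculus; the Python encoding below is an assumption for illustration only:

    def kleisli(f, g):
        """Kleisli composition of state-passing functions.

        A 'stateful function' maps (value, state) -> (value', state'),
        modelling one interactive step whose effect is threaded through.
        """
        def composed(value, state):
            v1, s1 = f(value, state)
            return g(v1, s1)
        return composed

    # Two interactive steps: record the input in the state, then echo it.
    read_step = lambda v, s: (v, dict(s, last=v))
    echo_step = lambda v, s: (f"echo:{s['last']}", s)
    step = kleisli(read_step, echo_step)
    print(step("hello", {}))      # ('echo:hello', {'last': 'hello'})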
|
2403.01852
|
Zhengyao Lv
|
Zhengyao Lv and Yuxiang Wei and Wangmeng Zuo and Kwan-Yee K. Wong
|
PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in large-scale pre-trained text-to-image models have led
to remarkable progress in semantic image synthesis. Nevertheless, synthesizing
high-quality images with consistent semantics and layout remains a challenge.
In this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE)
that harnesses pre-trained models to alleviate the aforementioned issues.
Specifically, we first employ the layout control map to faithfully represent
layouts in the feature space. Subsequently, we combine the layout and semantic
features in a timestep-adaptive manner to synthesize images with realistic
details. During fine-tuning, we propose the Semantic Alignment (SA) loss to
further enhance layout alignment. Additionally, we introduce the Layout-Free
Prior Preservation (LFP) loss, which leverages unlabeled data to maintain the
priors of pre-trained models, thereby improving the visual quality and semantic
consistency of synthesized images. Extensive experiments demonstrate that our
approach performs favorably in terms of visual quality, semantic consistency,
and layout alignment. The source code and model are available at
https://github.com/cszy98/PLACE/tree/main.
|
[
{
"created": "Mon, 4 Mar 2024 09:03:16 GMT",
"version": "v1"
}
] |
2024-03-05
|
[
[
"Lv",
"Zhengyao",
""
],
[
"Wei",
"Yuxiang",
""
],
[
"Zuo",
"Wangmeng",
""
],
[
"Wong",
"Kwan-Yee K.",
""
]
] |
Recent advancements in large-scale pre-trained text-to-image models have led to remarkable progress in semantic image synthesis. Nevertheless, synthesizing high-quality images with consistent semantics and layout remains a challenge. In this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE) that harnesses pre-trained models to alleviate the aforementioned issues. Specifically, we first employ the layout control map to faithfully represent layouts in the feature space. Subsequently, we combine the layout and semantic features in a timestep-adaptive manner to synthesize images with realistic details. During fine-tuning, we propose the Semantic Alignment (SA) loss to further enhance layout alignment. Additionally, we introduce the Layout-Free Prior Preservation (LFP) loss, which leverages unlabeled data to maintain the priors of pre-trained models, thereby improving the visual quality and semantic consistency of synthesized images. Extensive experiments demonstrate that our approach performs favorably in terms of visual quality, semantic consistency, and layout alignment. The source code and model are available at https://github.com/cszy98/PLACE/tree/main.
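The timestep-adaptive fusion described above can be pictured as blending layout and semantic features with a weight that depends on the denoising timestep. A toy sketch; the linear schedule, shapes, and names are assumptions, not PLACE's actual module:

    import numpy as np

    def fuse(layout_feat, semantic_feat, t, T):
        """Timestep-adaptive fusion of layout and semantic features.

        Early (noisy) timesteps lean on the layout control map to fix
        structure; later timesteps lean on semantics for detail.
        """
        alpha = t / T                     # 1.0 at the start of sampling
        return alpha * layout_feat + (1.0 - alpha) * semantic_feat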
|
1608.04999
|
James Cheney
|
Weili Fu, Roly Perera, Paul Anderson, and James Cheney
|
$\mu$Puppet: A Declarative Subset of the Puppet Configuration Language
|
Full version of ECOOP 2017 conference paper
| null |
10.4230/LIPIcs.ECOOP.2017.12
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Puppet is a popular declarative framework for specifying and managing complex
system configurations. The Puppet framework includes a domain-specific language
with several advanced features inspired by object-oriented programming,
including user-defined resource types, 'classes' with a form of inheritance,
and dependency management. Like most real-world languages, the language has
evolved in an ad hoc fashion, resulting in a design with numerous features,
some of which are complex, hard to understand, and difficult to use correctly.
We present an operational semantics for $\mu$Puppet, a representative subset
of the Puppet language that covers the distinctive features of Puppet, while
excluding features that are either deprecated or work-in-progress. Formalising
the semantics sheds light on difficult parts of the language, identifies
opportunities for future improvements, and provides a foundation for future
analysis or debugging techniques, such as static typechecking or provenance
tracking. Our semantics leads straightforwardly to a reference implementation
in Haskell. We also discuss some of Puppet's idiosyncrasies, particularly its
handling of classes and scope, and present an initial corpus of test cases
supported by our formal semantics.
|
[
{
"created": "Wed, 17 Aug 2016 15:26:48 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Aug 2016 20:17:32 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Jan 2017 18:06:06 GMT",
"version": "v3"
},
{
"created": "Fri, 26 May 2017 10:13:55 GMT",
"version": "v4"
}
] |
2017-09-12
|
[
[
"Fu",
"Weili",
""
],
[
"Perera",
"Roly",
""
],
[
"Anderson",
"Paul",
""
],
[
"Cheney",
"James",
""
]
] |
Puppet is a popular declarative framework for specifying and managing complex system configurations. The Puppet framework includes a domain-specific language with several advanced features inspired by object-oriented programming, including user-defined resource types, 'classes' with a form of inheritance, and dependency management. Like most real-world languages, the language has evolved in an ad hoc fashion, resulting in a design with numerous features, some of which are complex, hard to understand, and difficult to use correctly. We present an operational semantics for $\mu$Puppet, a representative subset of the Puppet language that covers the distinctive features of Puppet, while excluding features that are either deprecated or work-in-progress. Formalising the semantics sheds light on difficult parts of the language, identifies opportunities for future improvements, and provides a foundation for future analysis or debugging techniques, such as static typechecking or provenance tracking. Our semantics leads straightforwardly to a reference implementation in Haskell. We also discuss some of Puppet's idiosyncrasies, particularly its handling of classes and scope, and present an initial corpus of test cases supported by our formal semantics.
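The paper's reference implementation is in Haskell; to illustrate the flavour of an operational semantics for a Puppet-like language, here is a toy big-step evaluator for a tiny fragment (assignments and resource declarations only), far smaller than $\mu$Puppet and with an invented statement encoding:

    def evaluate(manifest, scope=None, catalog=None):
        """Evaluate a toy Puppet-like manifest into a resource catalog.

        manifest : list of statements, each either
                   ('assign', name, expr) or ('resource', type, title, attrs)
        Expressions are ('var', name) references or ('lit', value) literals.
        """
        scope = {} if scope is None else scope
        catalog = [] if catalog is None else catalog

        def eval_expr(e):
            tag, x = e
            return scope[x] if tag == 'var' else x

        for stmt in manifest:
            if stmt[0] == 'assign':
                _, name, expr = stmt
                scope[name] = eval_expr(expr)
            else:
                _, rtype, title, attrs = stmt
                catalog.append((rtype, eval_expr(title),
                                {k: eval_expr(v) for k, v in attrs.items()}))
        return catalog

    # $user = 'deploy'; file { '/home/deploy': owner => $user }
    print(evaluate([('assign', 'user', ('lit', 'deploy')),
                    ('resource', 'file', ('lit', '/home/deploy'),
                     {'owner': ('var', 'user')})]))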
|
cs/0406047
|
Alexei Stadnik
|
G.A. Ososkov, S.G. Dmitrievskiy, A.V. Stadnik
|
Self-organizing neural networks in classification and image recognition
| null | null | null | null |
cs.CV cs.AI
| null |
Self-organizing neural networks are used for brick finding in the OPERA
experiment. Self-organizing neural networks and wavelet analysis are also used
for the recognition and extraction of car numbers from images.
|
[
{
"created": "Thu, 24 Jun 2004 13:14:58 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Ososkov",
"G. A.",
""
],
[
"Dmitrievskiy",
"S. G.",
""
],
[
"Stadnik",
"A. V.",
""
]
] |
Self-organizing neural networks are used for brick finding in the OPERA experiment. Self-organizing neural networks and wavelet analysis are also used for the recognition and extraction of car numbers from images.
|
1310.0068
|
Rosemary Renaut
|
Saeed Vatankhah, Vahid E Ardestani and Rosemary A Renaut
|
Automatic estimation of the regularization parameter in 2-D focusing
gravity inversion: an application to the Safo manganese mine in northwest of
Iran
| null |
J. Geophys. Eng. 11 (2014) 045001
|
10.1088/1742-2132/11/4/045001
|
https://academic.oup.com/jge/article/11/4/045001/5113347
|
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the use of Tikhonov regularization with the minimum support
stabilizer for underdetermined 2-D inversion of gravity data. This stabilizer
produces models with non-smooth properties which is useful for identifying
geologic structures with sharp boundaries. A very important aspect of using
Tikhonov regularization is the choice of the regularization parameter that
controls the trade-off between the data fidelity and the stabilizing
functional. The L-curve and generalized cross validation techniques, which
only require the relative sizes of the uncertainties in the observations, are
considered. Both criteria are applied in an iterative process for which at
each iteration a value for the regularization parameter is estimated. Suitable values
for the regularization parameter are successfully determined in both cases for
synthetic but practically relevant examples. Whenever the geologic situation
permits, it is easier and more efficient to model the subsurface with a 2-D
algorithm, rather than to apply a full 3-D approach. Then, because the problem
is not large it is appropriate to use the generalized singular value
decomposition for solving the problem efficiently. The method is applied on a
profile of gravity data acquired over the Safo mining camp in Maku-Iran, which
is well known for manganese ores. The presented results demonstrate success in
reconstructing the geometry and density distribution of the subsurface source.
|
[
{
"created": "Mon, 30 Sep 2013 21:43:25 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jan 2014 00:49:06 GMT",
"version": "v2"
}
] |
2022-08-16
|
[
[
"Vatankhah",
"Saeed",
""
],
[
"Ardestani",
"Vahid E",
""
],
[
"Renaut",
"Rosemary A",
""
]
] |
We investigate the use of Tikhonov regularization with the minimum support stabilizer for underdetermined 2-D inversion of gravity data. This stabilizer produces models with non-smooth properties which is useful for identifying geologic structures with sharp boundaries. A very important aspect of using Tikhonov regularization is the choice of the regularization parameter that controls the trade-off between the data fidelity and the stabilizing functional. The L-curve and generalized cross validation techniques, which only require the relative sizes of the uncertainties in the observations, are considered. Both criteria are applied in an iterative process for which at each iteration a value for the regularization parameter is estimated. Suitable values for the regularization parameter are successfully determined in both cases for synthetic but practically relevant examples. Whenever the geologic situation permits, it is easier and more efficient to model the subsurface with a 2-D algorithm, rather than to apply a full 3-D approach. Then, because the problem is not large it is appropriate to use the generalized singular value decomposition for solving the problem efficiently. The method is applied on a profile of gravity data acquired over the Safo mining camp in Maku-Iran, which is well known for manganese ores. The presented results demonstrate success in reconstructing the geometry and density distribution of the subsurface source.
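The paper uses the GSVD within an iterative scheme; the following dense, grid-based sketch only illustrates the L-curve idea for choosing the regularization parameter (the corner detection here is deliberately crude and is an assumption, not the authors' criterion):

    import numpy as np

    def tikhonov_l_curve(A, b, L, lambdas):
        """Solve min ||Ax-b||^2 + lam^2 ||Lx||^2 over a grid of lam and
        return the solution at an approximate L-curve corner (here, the
        point closest to the shifted log-log origin)."""
        residuals, seminorms, sols = [], [], []
        for lam in lambdas:
            K = np.vstack([A, lam * L])
            rhs = np.concatenate([b, np.zeros(L.shape[0])])
            x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
            sols.append(x)
            residuals.append(np.linalg.norm(A @ x - b))
            seminorms.append(np.linalg.norm(L @ x))
        r = np.log(np.asarray(residuals)); s = np.log(np.asarray(seminorms))
        corner = np.argmin((r - r.min()) ** 2 + (s - s.min()) ** 2)
        return sols[corner], lambdas[corner]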
|
2404.04270
|
Yassaman Ebrahimzadeh Maboud
|
Yassaman Ebrahimzadeh Maboud, Muhammad Adnan, Divya Mahajan, Prashant
J. Nair
|
Accelerating Recommender Model Training by Dynamically Skipping Stale
Embeddings
| null | null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Training recommendation models poses significant challenges regarding resource
utilization and performance. Prior research has proposed an approach that
categorizes embeddings into popular and non-popular classes to reduce the
training time for recommendation models. We observe that, even among the
popular embeddings, certain embeddings undergo rapid training and exhibit
minimal subsequent variation, resulting in saturation. Consequently, updates to
these embeddings lack any contribution to model quality. This paper presents
Slipstream, a software framework that identifies stale embeddings on the fly
and skips their updates to enhance performance. This capability enables
Slipstream to achieve substantial speedup, optimize CPU-GPU bandwidth usage,
and eliminate unnecessary memory access. Slipstream showcases training time
reductions of 2x, 2.4x, 1.2x, and 1.175x across real-world datasets and
configurations, compared to Baseline XDL, Intel-optimized DLRM, FAE, and
Hotline, respectively.
|
[
{
"created": "Fri, 22 Mar 2024 00:29:06 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Maboud",
"Yassaman Ebrahimzadeh",
""
],
[
"Adnan",
"Muhammad",
""
],
[
"Mahajan",
"Divya",
""
],
[
"Nair",
"Prashant J.",
""
]
] |
Training recommendation models poses significant challenges regarding resource utilization and performance. Prior research has proposed an approach that categorizes embeddings into popular and non-popular classes to reduce the training time for recommendation models. We observe that, even among the popular embeddings, certain embeddings undergo rapid training and exhibit minimal subsequent variation, resulting in saturation. Consequently, updates to these embeddings lack any contribution to model quality. This paper presents Slipstream, a software framework that identifies stale embeddings on the fly and skips their updates to enhance performance. This capability enables Slipstream to achieve substantial speedup, optimize CPU-GPU bandwidth usage, and eliminate unnecessary memory access. Slipstream showcases training time reductions of 2x, 2.4x, 1.2x, and 1.175x across real-world datasets and configurations, compared to Baseline XDL, Intel-optimized DLRM, FAE, and Hotline, respectively.
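A toy sketch of the stale-embedding idea: track per-row update magnitudes and skip rows whose movement has saturated. The exponential averaging and threshold below are assumptions for illustration, not Slipstream's actual detector:

    import numpy as np

    def update_embeddings(emb, grads, lr, velocity, threshold=1e-4):
        """Skip updates to embedding rows whose recent movement has saturated.

        velocity : per-row exponential average of update magnitudes; rows
        below `threshold` are treated as stale, so their update (and the
        associated CPU-GPU traffic) is skipped.
        """
        step = lr * grads
        mag = np.linalg.norm(step, axis=1)
        velocity[:] = 0.9 * velocity + 0.1 * mag
        active = velocity >= threshold            # stale rows are skipped
        emb[active] -= step[active]
        return active.sum()                       # number of rows updated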
|
2312.12253
|
Demircan Tas
|
Demircan Tas, Rohit Priyadarshi Sanatani
|
Geo-located Aspect Based Sentiment Analysis (ABSA) for Crowdsourced
Evaluation of Urban Environments
|
Created for 6.8610, Quantitative Methods for Natural Language
Processing at MIT Fall 2022. 5 pages, 4 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Sentiment analysis methods are rapidly being adopted by the field of Urban
Design and Planning, for the crowdsourced evaluation of urban environments.
However, most models used within this domain are able to identify positive or
negative sentiment associated with a textual appraisal as a whole, without
inferring information about specific urban aspects contained within it, or the
sentiment associated with them. While Aspect Based Sentiment Analysis (ABSA) is
becoming increasingly popular, most existing ABSA models are trained on
non-urban themes such as restaurants, electronics, consumer goods and the like.
This body of research develops an ABSA model capable of extracting urban
aspects contained within geo-located textual urban appraisals, along with
corresponding aspect sentiment classification. We annotate a dataset of 2500
crowdsourced reviews of public parks, and train a Bidirectional Encoder
Representations from Transformers (BERT) model with Local Context Focus (LCF)
on this data. Our model achieves significant improvement in prediction accuracy
on urban reviews, for both Aspect Term Extraction (ATE) and Aspect Sentiment
Classification (ASC) tasks. For demonstrative analysis, positive and negative
urban aspects across Boston are spatially visualized. We hope that this model
is useful for designers and planners for fine-grained urban sentiment
evaluation.
|
[
{
"created": "Tue, 19 Dec 2023 15:37:27 GMT",
"version": "v1"
}
] |
2023-12-20
|
[
[
"Tas",
"Demircan",
""
],
[
"Sanatani",
"Rohit Priyadarshi",
""
]
] |
Sentiment analysis methods are rapidly being adopted by the field of Urban Design and Planning, for the crowdsourced evaluation of urban environments. However, most models used within this domain are able to identify positive or negative sentiment associated with a textual appraisal as a whole, without inferring information about specific urban aspects contained within it, or the sentiment associated with them. While Aspect Based Sentiment Analysis (ABSA) is becoming increasingly popular, most existing ABSA models are trained on non-urban themes such as restaurants, electronics, consumer goods and the like. This body of research develops an ABSA model capable of extracting urban aspects contained within geo-located textual urban appraisals, along with corresponding aspect sentiment classification. We annotate a dataset of 2500 crowdsourced reviews of public parks, and train a Bidirectional Encoder Representations from Transformers (BERT) model with Local Context Focus (LCF) on this data. Our model achieves significant improvement in prediction accuracy on urban reviews, for both Aspect Term Extraction (ATE) and Aspect Sentiment Classification (ASC) tasks. For demonstrative analysis, positive and negative urban aspects across Boston are spatially visualized. We hope that this model is useful for designers and planners for fine-grained urban sentiment evaluation.
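Aspect Term Extraction is commonly cast as token-level BIO tagging; the decoder below turns per-token predictions into aspect spans. It is a generic sketch with an assumed tag set, not the authors' pipeline:

    def decode_aspects(tokens, tags):
        """Turn BIO tag predictions into aspect term spans.

        tokens : list of word tokens from a review
        tags   : per-token labels in {'B-ASP', 'I-ASP', 'O'}
        """
        spans, current = [], []
        for tok, tag in zip(tokens, tags):
            if tag == 'B-ASP':
                if current: spans.append(' '.join(current))
                current = [tok]
            elif tag == 'I-ASP' and current:
                current.append(tok)
            else:
                if current: spans.append(' '.join(current))
                current = []
        if current: spans.append(' '.join(current))
        return spans

    # ['the','playground','was','muddy'] with ['O','B-ASP','O','O'] -> ['playground']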
|
2202.07178
|
Yuanxiong Guo
|
Rui Hu, Yanmin Gong and Yuanxiong Guo
|
Federated Learning with Sparsified Model Perturbation: Improving
Accuracy under Client-Level Differential Privacy
| null | null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Federated learning (FL) that enables edge devices to collaboratively learn a
shared model while keeping their training data locally has received great
attention recently and can protect privacy in comparison with the traditional
centralized learning paradigm. However, sensitive information about the
training data can still be inferred from model parameters shared in FL.
Differential privacy (DP) is the state-of-the-art technique to defend against
those attacks. The key challenge to achieving DP in FL lies in the adverse
impact of DP noise on model accuracy, particularly for deep learning models
with large numbers of parameters. This paper develops a novel
differentially-private FL scheme named Fed-SMP that provides a client-level DP
guarantee while maintaining high model accuracy. To mitigate the impact of
privacy protection on model accuracy, Fed-SMP leverages a new technique called
Sparsified Model Perturbation (SMP) where local models are sparsified first
before being perturbed by Gaussian noise. We provide a tight end-to-end privacy
analysis for Fed-SMP using Renyi DP and prove the convergence of Fed-SMP with
both unbiased and biased sparsifications. Extensive experiments on real-world
datasets are conducted to demonstrate the effectiveness of Fed-SMP in improving
model accuracy with the same DP guarantee and saving communication cost
simultaneously.
|
[
{
"created": "Tue, 15 Feb 2022 04:05:42 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Nov 2022 20:15:26 GMT",
"version": "v2"
}
] |
2022-11-17
|
[
[
"Hu",
"Rui",
""
],
[
"Gong",
"Yanmin",
""
],
[
"Guo",
"Yuanxiong",
""
]
] |
Federated learning (FL) that enables edge devices to collaboratively learn a shared model while keeping their training data locally has received great attention recently and can protect privacy in comparison with the traditional centralized learning paradigm. However, sensitive information about the training data can still be inferred from model parameters shared in FL. Differential privacy (DP) is the state-of-the-art technique to defend against those attacks. The key challenge to achieving DP in FL lies in the adverse impact of DP noise on model accuracy, particularly for deep learning models with large numbers of parameters. This paper develops a novel differentially-private FL scheme named Fed-SMP that provides a client-level DP guarantee while maintaining high model accuracy. To mitigate the impact of privacy protection on model accuracy, Fed-SMP leverages a new technique called Sparsified Model Perturbation (SMP) where local models are sparsified first before being perturbed by Gaussian noise. We provide a tight end-to-end privacy analysis for Fed-SMP using Renyi DP and prove the convergence of Fed-SMP with both unbiased and biased sparsifications. Extensive experiments on real-world datasets are conducted to demonstrate the effectiveness of Fed-SMP in improving model accuracy with the same DP guarantee and saving communication cost simultaneously.
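A minimal sketch of the Sparsified Model Perturbation pattern described above: top-k sparsify a client update, clip it, then add Gaussian noise. Parameter names and the exact order of operations are assumptions, not the paper's algorithm:

    import numpy as np

    def sparsified_model_perturbation(delta, k, sigma, clip):
        """Top-k sparsify a client's model update, clip, then perturb."""
        flat = delta.ravel().copy()
        keep = np.argsort(np.abs(flat))[-k:]      # top-k coordinates
        sparse = np.zeros_like(flat)
        sparse[keep] = flat[keep]
        norm = np.linalg.norm(sparse)
        if norm > clip:                           # clip to bound sensitivity
            sparse *= clip / norm
        sparse[keep] += np.random.normal(0.0, sigma * clip, size=k)
        return sparse.reshape(delta.shape)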
|
2407.20731
|
Yi Ju
|
Yi Ju, Mingshuai Li, Adalberto Perez, Laura Bellentani, Niclas
Jansson, Stefano Markidis, Philipp Schlatter, Erwin Laure
|
In-Situ Techniques on GPU-Accelerated Data-Intensive Applications
| null | null |
10.1109/e-science58273.2023.10254865
| null |
cs.PF cs.CE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The computational power of High-Performance Computing (HPC) systems is
constantly increasing, however, their input/output (IO) performance grows
relatively slowly, and their storage capacity is also limited. This imbalance
presents significant challenges for applications such as Molecular Dynamics
(MD) and Computational Fluid Dynamics (CFD), which generate massive amounts of
data for further visualization or analysis. At the same time, checkpointing is
crucial for long runs on HPC clusters, due to limited walltimes and/or failures
of system components, and typically requires the storage of large amounts of
data. Thus, restricted IO performance and storage capacity can lead to
bottlenecks for the performance of full application workflows (as compared to
computational kernels without IO). In-situ techniques, where data is further
processed while still in memory rather than written out over the I/O subsystem,
can help to tackle these problems. In contrast to traditional post-processing
methods, in-situ techniques can reduce or avoid the need to write or read data
via the IO subsystem. They offer a promising approach for applications aiming
to leverage the full power of large scale HPC systems. In-situ techniques can
also be applied to hybrid computational nodes on HPC systems consisting of
graphics processing units (GPUs) and central processing units (CPUs). On one
node, the GPUs would have significant performance advantages over the CPUs.
Therefore, current approaches for GPU-accelerated applications often focus on
maximizing GPU usage, leaving CPUs underutilized. In-situ tasks that use CPUs
to perform data analysis or preprocess data concurrently with the running
simulation offer a possibility to improve this underutilization.
|
[
{
"created": "Tue, 30 Jul 2024 11:03:00 GMT",
"version": "v1"
}
] |
2024-07-31
|
[
[
"Ju",
"Yi",
""
],
[
"Li",
"Mingshuai",
""
],
[
"Perez",
"Adalberto",
""
],
[
"Bellentani",
"Laura",
""
],
[
"Jansson",
"Niclas",
""
],
[
"Markidis",
"Stefano",
""
],
[
"Schlatter",
"Philipp",
""
],
[
"Laure",
"Erwin",
""
]
] |
The computational power of High-Performance Computing (HPC) systems is constantly increasing, however, their input/output (IO) performance grows relatively slowly, and their storage capacity is also limited. This imbalance presents significant challenges for applications such as Molecular Dynamics (MD) and Computational Fluid Dynamics (CFD), which generate massive amounts of data for further visualization or analysis. At the same time, checkpointing is crucial for long runs on HPC clusters, due to limited walltimes and/or failures of system components, and typically requires the storage of large amounts of data. Thus, restricted IO performance and storage capacity can lead to bottlenecks for the performance of full application workflows (as compared to computational kernels without IO). In-situ techniques, where data is further processed while still in memory rather than written out over the I/O subsystem, can help to tackle these problems. In contrast to traditional post-processing methods, in-situ techniques can reduce or avoid the need to write or read data via the IO subsystem. They offer a promising approach for applications aiming to leverage the full power of large scale HPC systems. In-situ techniques can also be applied to hybrid computational nodes on HPC systems consisting of graphics processing units (GPUs) and central processing units (CPUs). On one node, the GPUs would have significant performance advantages over the CPUs. Therefore, current approaches for GPU-accelerated applications often focus on maximizing GPU usage, leaving CPUs underutilized. In-situ tasks that use CPUs to perform data analysis or preprocess data concurrently with the running simulation offer a possibility to improve this underutilization.
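A toy sketch of the in-situ pattern: a simulation loop hands each field to a concurrent CPU analysis thread through a bounded in-memory buffer instead of writing it to the IO subsystem. The interfaces are assumptions for illustration:

    import threading, queue

    def run_with_insitu(simulate_step, analyse, n_steps):
        """Run a simulation loop while a CPU thread analyses fields in-situ."""
        fields = queue.Queue(maxsize=4)           # bounded staging buffer

        def analyst():
            while True:
                f = fields.get()
                if f is None: break
                analyse(f)                        # CPU-side reduction/statistics

        t = threading.Thread(target=analyst); t.start()
        state = None
        for _ in range(n_steps):
            state, field = simulate_step(state)   # e.g. offloaded to the GPU
            fields.put(field)                     # handed over in memory
        fields.put(None); t.join()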
|
2006.09264
|
Rob Geada
|
Rob Geada, Dennis Prangle, Andrew Stephen McGough
|
Bonsai-Net: One-Shot Neural Architecture Search via Differentiable
Pruners
|
Accepted to CVPR-NAS 2020.
https://github.com/RobGeada/bonsai-net-lite
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One-shot Neural Architecture Search (NAS) aims to minimize the computational
expense of discovering state-of-the-art models. However, in the past year
attention has been drawn to the comparable performance of naive random search
across the same search spaces used by leading NAS algorithms. To address this,
we explore the effects of drastically relaxing the NAS search space, and we
present Bonsai-Net, an efficient one-shot NAS method to explore our relaxed
search space. Bonsai-Net is built around a modified differential pruner and can
consistently discover state-of-the-art architectures that are significantly
better than random search with fewer parameters than other state-of-the-art
methods. Additionally, Bonsai-Net performs simultaneous model search and
training, dramatically reducing the total time it takes to generate
fully-trained models from scratch.
|
[
{
"created": "Fri, 12 Jun 2020 14:44:00 GMT",
"version": "v1"
},
{
"created": "Sun, 9 May 2021 18:03:44 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Jun 2021 15:40:29 GMT",
"version": "v3"
}
] |
2021-06-07
|
[
[
"Geada",
"Rob",
""
],
[
"Prangle",
"Dennis",
""
],
[
"McGough",
"Andrew Stephen",
""
]
] |
One-shot Neural Architecture Search (NAS) aims to minimize the computational expense of discovering state-of-the-art models. However, in the past year attention has been drawn to the comparable performance of naive random search across the same search spaces used by leading NAS algorithms. To address this, we explore the effects of drastically relaxing the NAS search space, and we present Bonsai-Net, an efficient one-shot NAS method to explore our relaxed search space. Bonsai-Net is built around a modified differential pruner and can consistently discover state-of-the-art architectures that are significantly better than random search with fewer parameters than other state-of-the-art methods. Additionally, Bonsai-Net performs simultaneous model search and training, dramatically reducing the total time it takes to generate fully-trained models from scratch.
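A generic sketch of a differentiable pruner: candidate operations are gated by learnable weights, an L1 penalty drives gates towards zero, and weak operations are pruned. This is not Bonsai-Net's exact pruner; the gating and threshold are assumptions:

    import torch

    class DifferentiablePruner(torch.nn.Module):
        """Gate candidate operations with learnable weights."""
        def __init__(self, n_ops):
            super().__init__()
            self.logits = torch.nn.Parameter(torch.zeros(n_ops))

        def forward(self, op_outputs):            # list of same-shaped tensors
            gates = torch.sigmoid(self.logits)
            return sum(g * o for g, o in zip(gates, op_outputs))

        def l1_penalty(self):                     # added to the training loss
            return torch.sigmoid(self.logits).sum()

        def prune_mask(self, threshold=0.05):     # ops to delete from the cell
            return torch.sigmoid(self.logits) < threshold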
|
1908.10159
|
Philip Bille
|
Philip Bille and Inge Li G{\o}rtz and Frederik Rye Skjoldjensen
|
Partial Sums on the Ultra-Wide Word RAM
|
Extended abstract appeared at TAMC 2020
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the classic partial sums problem on the ultra-wide word RAM model
of computation. This model extends the classic $w$-bit word RAM model with
special ultrawords of length $w^2$ bits that support standard arithmetic and
boolean operations and scattered memory access operations that can access $w$
(non-contiguous) locations in memory. The ultra-wide word RAM model captures
(and idealizes) modern vector processor architectures.
Our main result is a new in-place data structure for the partial sum problem
that only stores a constant number of ultrawords in addition to the input and
supports operations in doubly logarithmic time. This matches the best known
time bounds for the problem (among polynomial space data structures) while
improving the space from superlinear to a constant number of ultrawords. Our
results are based on a simple and elegant in-place word RAM data structure,
known as the Fenwick tree. Our main technical contribution is a new efficient
parallel ultra-wide word RAM implementation of the Fenwick tree, which is
likely of independent interest.
|
[
{
"created": "Tue, 27 Aug 2019 12:33:32 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Sep 2020 12:09:37 GMT",
"version": "v2"
}
] |
2020-10-01
|
[
[
"Bille",
"Philip",
""
],
[
"Gørtz",
"Inge Li",
""
],
[
"Skjoldjensen",
"Frederik Rye",
""
]
] |
We consider the classic partial sums problem on the ultra-wide word RAM model of computation. This model extends the classic $w$-bit word RAM model with special ultrawords of length $w^2$ bits that support standard arithmetic and boolean operations and scattered memory access operations that can access $w$ (non-contiguous) locations in memory. The ultra-wide word RAM model captures (and idealizes) modern vector processor architectures. Our main result is a new in-place data structure for the partial sum problem that only stores a constant number of ultrawords in addition to the input and supports operations in doubly logarithmic time. This matches the best known time bounds for the problem (among polynomial space data structures) while improving the space from superlinear to a constant number of ultrawords. Our results are based on a simple and elegant in-place word RAM data structure, known as the Fenwick tree. Our main technical contribution is a new efficient parallel ultra-wide word RAM implementation of the Fenwick tree, which is likely of independent interest.
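For reference, the sequential word-RAM Fenwick tree that the paper parallelises with ultrawords; this is the classic structure, sketched in Python:

    class FenwickTree:
        """Binary indexed tree: in-place apart from the array itself,
        with O(log n) prefix sum and update."""
        def __init__(self, n):
            self.t = [0] * (n + 1)

        def update(self, i, delta):               # add delta to element i (1-based)
            while i < len(self.t):
                self.t[i] += delta
                i += i & (-i)

        def prefix_sum(self, i):                  # sum of elements 1..i
            s = 0
            while i > 0:
                s += self.t[i]
                i -= i & (-i)
            return s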
|
2406.14882
|
Issey Sukeda
|
Issey Sukeda, Risa Kishikawa, Satoshi Kodera
|
70B-parameter large language models in Japanese medical
question-answering
|
7 pages, 2 figures, 4 Tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since the rise of large language models (LLMs), domain adaptation has been
one of the hot topics in various domains. Many medical LLMs trained on
English medical datasets have been made public recently. However, research on
Japanese LLMs in the medical domain is still lacking. Here we utilize
multiple 70B-parameter LLMs for the first time and show that instruction
tuning using a Japanese medical question-answering dataset significantly
improves the ability of Japanese LLMs to solve Japanese medical license
exams, surpassing 50\% in accuracy. In particular, the Japanese-centric
models exhibit a more significant leap in improvement through instruction
tuning compared to their English-centric counterparts. This underscores the
importance of continual pretraining and of adjusting the tokenizer to the
local language. We also examine two slightly different prompt formats,
resulting in non-negligible performance improvements.
|
[
{
"created": "Fri, 21 Jun 2024 06:04:10 GMT",
"version": "v1"
}
] |
2024-06-24
|
[
[
"Sukeda",
"Issey",
""
],
[
"Kishikawa",
"Risa",
""
],
[
"Kodera",
"Satoshi",
""
]
] |
Since the rise of large language models (LLMs), domain adaptation has been one of the hot topics in various domains. Many medical LLMs trained on English medical datasets have been made public recently. However, research on Japanese LLMs in the medical domain is still lacking. Here we utilize multiple 70B-parameter LLMs for the first time and show that instruction tuning using a Japanese medical question-answering dataset significantly improves the ability of Japanese LLMs to solve Japanese medical license exams, surpassing 50\% in accuracy. In particular, the Japanese-centric models exhibit a more significant leap in improvement through instruction tuning compared to their English-centric counterparts. This underscores the importance of continual pretraining and of adjusting the tokenizer to the local language. We also examine two slightly different prompt formats, resulting in non-negligible performance improvements.
|
2210.11111
|
Henrique Donancio
|
Henrique Don\^ancio and Laurent Vercouter and Harald Roclawski
|
The Pump Scheduling Problem: A Real-World Scenario for Reinforcement
Learning
| null | null | null | null |
cs.LG cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Deep Reinforcement Learning (DRL) has achieved remarkable success in
scenarios such as games and has emerged as a potential solution for control
tasks. That is due to its ability to leverage scalability and handle complex
dynamics. However, few works have targeted environments grounded in real-world
settings. Indeed, real-world scenarios can be challenging, especially when
faced with the high dimensionality of the state space and unknown reward
function. We release a testbed consisting of an environment simulator and
demonstrations of human operation concerning pump scheduling of a real-world
water distribution facility to facilitate research. The pump scheduling problem
can be viewed as a decision process to decide when to operate pumps to supply
water while limiting electricity consumption and meeting system constraints. To
provide a starting point, we release a well-documented codebase, present an
overview of some challenges that can be addressed and provide a baseline
representation of the problem. The code and dataset are available at
https://gitlab.com/hdonancio/pumpscheduling.
|
[
{
"created": "Thu, 20 Oct 2022 09:16:03 GMT",
"version": "v1"
}
] |
2022-10-21
|
[
[
"Donâncio",
"Henrique",
""
],
[
"Vercouter",
"Laurent",
""
],
[
"Roclawski",
"Harald",
""
]
] |
Deep Reinforcement Learning (DRL) has achieved remarkable success in scenarios such as games and has emerged as a potential solution for control tasks. That is due to its ability to leverage scalability and handle complex dynamics. However, few works have targeted environments grounded in real-world settings. Indeed, real-world scenarios can be challenging, especially when faced with the high dimensionality of the state space and unknown reward function. We release a testbed consisting of an environment simulator and demonstrations of human operation concerning pump scheduling of a real-world water distribution facility to facilitate research. The pump scheduling problem can be viewed as a decision process to decide when to operate pumps to supply water while limiting electricity consumption and meeting system constraints. To provide a starting point, we release a well-documented codebase, present an overview of some challenges that can be addressed and provide a baseline representation of the problem. The code and dataset are available at https://gitlab.com/hdonancio/pumpscheduling.
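A skeleton of the pump-scheduling decision process described above, in the style of a reinforcement-learning environment; the simulator interface, reward shape, and penalty are assumptions for illustration, not the released testbed's API:

    class PumpSchedulingEnv:
        """Pump scheduling as a sequential decision process (sketch)."""
        def __init__(self, simulator):
            self.sim = simulator                  # water-network simulator

        def reset(self):
            self.state = self.sim.initial_state()
            return self.state

        def step(self, pump_on):
            """pump_on: boolean vector, one entry per pump."""
            self.state, demand_met, kwh = self.sim.advance(self.state, pump_on)
            penalty = 0.0 if demand_met else 100.0   # constraint violation
            reward = -kwh - penalty                  # limit electricity use
            return self.state, reward, self.sim.done(self.state), {}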
|
1804.02608
|
Tauhid Zaman
|
Fanyu Que, Krishnan Rajagopalan, Tauhid Zaman
|
Penetrating a Social Network: The Follow-back Problem
|
38 pages, 14 figures
| null | null | null |
cs.SI physics.soc-ph stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern threats have emerged from the prevalence of social networks. Hostile
actors, such as extremist groups or foreign governments, utilize these networks
to run propaganda campaigns with different aims. For extremists, these
campaigns are designed for recruiting new members or inciting violence. For
foreign governments, the aim may be to create instability in rival nations.
Proper social network counter-measures are needed to combat these threats. Here
we present one important counter-measure: penetrating social networks. This
means making target users connect with or follow agents deployed in the social
network. Once such connections are established with the targets, the agents can
influence them by sharing content which counters the influence campaign. In
this work we study how to penetrate a social network, which we call the
follow-back problem. The goal here is to find a policy that maximizes the
number of targets that follow the agent.
We conduct an empirical study to understand what behavioral and network
features affect the probability of a target following an agent. We find that
the degree of the target and the size of the mutual neighborhood of the agent
and target in the network affect this probability. Based on our empirical
findings, we then propose a model for targets following an agent. Using this
model, we solve the follow-back problem exactly on directed acyclic graphs and
derive a closed form expression for the expected number of follows an agent
receives under the optimal policy. We then formulate the follow-back problem on
an arbitrary graph as an integer program. To evaluate our integer program based
policies, we conduct simulations on real social network topologies in Twitter.
We find that our policies result in more effective network penetration, with
significant increases in the expected number of targets that follow the agent.
|
[
{
"created": "Sun, 8 Apr 2018 00:58:51 GMT",
"version": "v1"
}
] |
2018-04-10
|
[
[
"Que",
"Fanyu",
""
],
[
"Rajagopalan",
"Krishnan",
""
],
[
"Zaman",
"Tauhid",
""
]
] |
Modern threats have emerged from the prevalence of social networks. Hostile actors, such as extremist groups or foreign governments, utilize these networks to run propaganda campaigns with different aims. For extremists, these campaigns are designed for recruiting new members or inciting violence. For foreign governments, the aim may be to create instability in rival nations. Proper social network counter-measures are needed to combat these threats. Here we present one important counter-measure: penetrating social networks. This means making target users connect with or follow agents deployed in the social network. Once such connections are established with the targets, the agents can influence them by sharing content which counters the influence campaign. In this work we study how to penetrate a social network, which we call the follow-back problem. The goal here is to find a policy that maximizes the number of targets that follow the agent. We conduct an empirical study to understand what behavioral and network features affect the probability of a target following an agent. We find that the degree of the target and the size of the mutual neighborhood of the agent and target in the network affect this probability. Based on our empirical findings, we then propose a model for targets following an agent. Using this model, we solve the follow-back problem exactly on directed acyclic graphs and derive a closed form expression for the expected number of follows an agent receives under the optimal policy. We then formulate the follow-back problem on an arbitrary graph as an integer program. To evaluate our integer program based policies, we conduct simulations on real social network topologies in Twitter. We find that our policies result in more effective network penetration, with significant increases in the expected number of targets that follow the agent.
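To make the objective concrete, here is a sketch of the expected number of follow-backs under a logistic model driven by the two features the study finds to matter, target degree and mutual neighborhood size; the coefficients are placeholders, not the paper's fitted values:

    import math

    def expected_follows(targets, chosen):
        """Expected follow-backs if the agent follows the `chosen` targets.

        targets : list of (degree, mutual_neighborhood_size) pairs
        chosen  : indices of targets the agent follows
        """
        b0, b_deg, b_mut = -2.0, -0.001, 0.3      # illustrative coefficients
        total = 0.0
        for i in chosen:
            degree, mutual = targets[i]
            total += 1.0 / (1.0 + math.exp(-(b0 + b_deg * degree + b_mut * mutual)))
        return total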
|
1403.5022
|
Jason Williams
|
Jason L. Williams
|
An efficient, variational approximation of the best fitting
multi-Bernoulli filter
|
Accepted, IEEE Transactions on Signal Processing,
http://dx.doi.org/10.1109/TSP.2014.2370946
|
IEEE Transactions on Signal Processing, vol 63, no 1, pp 258-273,
January 2015
|
10.1109/TSP.2014.2370946
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The joint probabilistic data association (JPDA) filter is a popular tracking
methodology for problems involving well-spaced targets, but it is rarely
applied in problems with closely-spaced targets due to its complexity in these
cases, and due to the well-known phenomenon of coalescence. This paper
addresses these difficulties using random finite sets (RFSs) and variational
inference, deriving a highly tractable, approximate method for obtaining the
multi-Bernoulli distribution that minimises the set Kullback-Leibler (KL)
divergence from the true posterior, working within the RFS framework to
incorporate uncertainty in target existence. The derivation is interpreted as
an application of expectation-maximisation (EM), where the missing data is the
correspondence of Bernoulli components (i.e., tracks) under each data
association hypothesis. The missing data is shown to play an identical role to
the selection of an ordered distribution in the same ordered family in the set
JPDA algorithm. Subsequently, a special case of the proposed method is utilised
to provide an efficient approximation of the minimum mean optimal sub-pattern
assignment estimator. The performance of the proposed methods is demonstrated
in challenging scenarios in which up to twenty targets come into close
proximity.
|
[
{
"created": "Thu, 20 Mar 2014 01:57:53 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Nov 2014 00:42:44 GMT",
"version": "v2"
}
] |
2014-12-16
|
[
[
"Williams",
"Jason L.",
""
]
] |
The joint probabilistic data association (JPDA) filter is a popular tracking methodology for problems involving well-spaced targets, but it is rarely applied in problems with closely-spaced targets due to its complexity in these cases, and due to the well-known phenomenon of coalescence. This paper addresses these difficulties using random finite sets (RFSs) and variational inference, deriving a highly tractable, approximate method for obtaining the multi-Bernoulli distribution that minimises the set Kullback-Leibler (KL) divergence from the true posterior, working within the RFS framework to incorporate uncertainty in target existence. The derivation is interpreted as an application of expectation-maximisation (EM), where the missing data is the correspondence of Bernoulli components (i.e., tracks) under each data association hypothesis. The missing data is shown to play an identical role to the selection of an ordered distribution in the same ordered family in the set JPDA algorithm. Subsequently, a special case of the proposed method is utilised to provide an efficient approximation of the minimum mean optimal sub-pattern assignment estimator. The performance of the proposed methods is demonstrated in challenging scenarios in which up to twenty targets come into close proximity.
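The complexity the paper works around comes from enumerating joint association events. A tiny sketch of exact JPDA marginal association probabilities by brute-force enumeration (no missed detections or clutter, for brevity); exactly this enumeration is what becomes intractable for closely spaced targets:

    import itertools
    import numpy as np

    def jpda_marginals(likelihood):
        """Marginal association probabilities via joint-event enumeration.

        likelihood[t, m] : likelihood of measurement m given target t,
        with at least as many measurements as targets.
        """
        n_t, n_m = likelihood.shape
        marg = np.zeros_like(likelihood)
        total = 0.0
        for perm in itertools.permutations(range(n_m), n_t):
            w = np.prod([likelihood[t, m] for t, m in enumerate(perm)])
            total += w
            for t, m in enumerate(perm):
                marg[t, m] += w
        return marg / total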
|