| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1210.1161
|
Efi Papatheocharous
|
Efi Papatheocharous, Harris Papadopoulos and Andreas S. Andreou
|
Feature Subset Selection for Software Cost Modelling and Estimation
|
Engineering Intelligent Systems Vol 18 (3/4) September/December 2010
| null | null | null |
cs.SE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature selection has recently been used in the area of software engineering
for improving the accuracy and robustness of software cost models. The idea
behind selecting the most informative subset of features from a pool of
available cost drivers stems from the hypothesis that reducing the
dimensionality of datasets will significantly minimise the complexity and time
required to reach an estimation using a particular modelling technique. This
work investigates the appropriateness of attributes obtained from empirical
project databases, and aims to reduce the cost drivers used while preserving
performance. Finding suitable subset selections that may cater for improved
predictions may be considered a pre-processing step of a particular technique
employed for cost estimation (filter or wrapper), or an internal (embedded)
step to minimise the fitting error. This paper compares nine relatively
popular feature selection methods and uses the empirical values of selected
attributes recorded in the ISBSG and Desharnais datasets to estimate software
development effort.
|
[
{
"created": "Wed, 3 Oct 2012 16:12:07 GMT",
"version": "v1"
}
] |
2023-12-21
|
[
[
"Papatheocharous",
"Efi",
""
],
[
"Papadopoulos",
"Harris",
""
],
[
"Andreou",
"Andreas S.",
""
]
] |
Feature selection has recently been used in the area of software engineering for improving the accuracy and robustness of software cost models. The idea behind selecting the most informative subset of features from a pool of available cost drivers stems from the hypothesis that reducing the dimensionality of datasets will significantly minimise the complexity and time required to reach an estimation using a particular modelling technique. This work investigates the appropriateness of attributes obtained from empirical project databases, and aims to reduce the cost drivers used while preserving performance. Finding suitable subset selections that may cater for improved predictions may be considered a pre-processing step of a particular technique employed for cost estimation (filter or wrapper), or an internal (embedded) step to minimise the fitting error. This paper compares nine relatively popular feature selection methods and uses the empirical values of selected attributes recorded in the ISBSG and Desharnais datasets to estimate software development effort.
|
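The filter-style subset selection described in the abstract above can be sketched on toy data: rank each cost driver by its absolute correlation with recorded effort and keep the k most informative. This is a hedged, generic illustration of a filter method, not a reproduction of any of the paper's nine methods; the feature names and project data are invented.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two numeric columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def filter_select(features, effort, k):
    """Filter-method feature selection: score each cost driver by
    |correlation| with effort and keep the top-k column names."""
    scores = {name: abs(pearson(col, effort)) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy project database: two informative drivers and one noise column.
features = {
    "size_fp":   [100, 200, 300, 400, 500],
    "team_size": [2, 4, 5, 8, 10],
    "noise":     [7, 1, 9, 2, 5],
}
effort = [120, 240, 310, 430, 520]
print(filter_select(features, effort, 2))  # the noise column is dropped
```

A wrapper method would instead score each candidate subset by the accuracy of the downstream estimator, which is more expensive but tailored to the model.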
cs/0309032
|
Alexandre Tessier
|
Gerard Ferrand, Willy Lesaint, Alexandre Tessier
|
Towards declarative diagnosis of constraint programs over finite domains
|
In M. Ronsse, K. De Bosschere (eds), proceedings of the Fifth
International Workshop on Automated Debugging (AADEBUG 2003), September 2003,
Ghent. cs.SE/0309027
| null | null | null |
cs.SE
| null |
The paper proposes a theoretical approach to the debugging of constraint
programs based on a notion of explanation tree. The proposed approach is an
attempt to adapt algorithmic debugging to constraint programming. In this
theoretical framework for domain reduction, explanations are proof trees
explaining value removals. These proof trees are defined by inductive
definitions which express the removals of values as consequences of other value
removals. Explanations may be considered as the essence of constraint
programming. They are a declarative view of the computation trace. The
diagnosis consists in locating an error in an explanation rooted by a symptom.
|
[
{
"created": "Wed, 17 Sep 2003 12:42:58 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Ferrand",
"Gerard",
""
],
[
"Lesaint",
"Willy",
""
],
[
"Tessier",
"Alexandre",
""
]
] |
The paper proposes a theoretical approach to the debugging of constraint programs based on a notion of explanation tree. The proposed approach is an attempt to adapt algorithmic debugging to constraint programming. In this theoretical framework for domain reduction, explanations are proof trees explaining value removals. These proof trees are defined by inductive definitions which express the removals of values as consequences of other value removals. Explanations may be considered as the essence of constraint programming. They are a declarative view of the computation trace. The diagnosis consists in locating an error in an explanation rooted by a symptom.
|
1311.5427
|
Gerardo Febres
|
Gerardo Febres, Klaus Jaffe, Carlos Gershenson
|
Complexity measurement of natural and artificial languages
|
29 pages, 11 figures, 3 tables, 2 appendixes
|
Complexity 20 6 429- (2015)
|
10.1002/cplx.21529
| null |
cs.CL cs.IT math.IT nlin.AO physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We compared entropy for texts written in natural languages (English, Spanish)
and artificial languages (computer software) based on a simple expression for
the entropy as a function of message length and specific word diversity. Code
text written in artificial languages showed higher entropy than text of similar
length expressed in natural languages. Spanish texts exhibit more symbolic
diversity than English ones. Results showed that algorithms based on complexity
measures differentiate artificial from natural languages, and that text
analysis based on complexity measures allows the unveiling of important aspects
of their nature. We propose specific expressions to examine entropy-related
aspects of texts and estimate the values of entropy, emergence,
self-organization and complexity based on specific diversity and message
length.
|
[
{
"created": "Wed, 20 Nov 2013 02:43:22 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Nov 2013 06:02:18 GMT",
"version": "v2"
}
] |
2015-12-03
|
[
[
"Febres",
"Gerardo",
""
],
[
"Jaffe",
"Klaus",
""
],
[
"Gershenson",
"Carlos",
""
]
] |
We compared entropy for texts written in natural languages (English, Spanish) and artificial languages (computer software) based on a simple expression for the entropy as a function of message length and specific word diversity. Code text written in artificial languages showed higher entropy than text of similar length expressed in natural languages. Spanish texts exhibit more symbolic diversity than English ones. Results showed that algorithms based on complexity measures differentiate artificial from natural languages, and that text analysis based on complexity measures allows the unveiling of important aspects of their nature. We propose specific expressions to examine entropy-related aspects of texts and estimate the values of entropy, emergence, self-organization and complexity based on specific diversity and message length.
|
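The paper's specific entropy expressions are not given in the abstract; as a hedged stand-in, plain Shannon word entropy already depends only on message length (total words) and specific word diversity (the distinct-word distribution), and reproduces the effect described above: code-like text spreads probability over more distinct tokens relative to its length.

```python
import math
from collections import Counter

def word_entropy(text):
    """Shannon entropy in bits per word, computed from the word-frequency
    distribution of a whitespace-tokenized text."""
    words = text.lower().split()
    n = len(words)
    counts = Counter(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

natural = "the cat sat on the mat and the dog sat on the rug"
code = "for i in range n : total = total + data [ i ]"
print(word_entropy(natural), word_entropy(code))  # the code sample is higher
```

This is only an illustrative baseline; the authors' emergence, self-organization and complexity measures are derived quantities built on top of entropy-like expressions.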
1511.07038
|
Jakub Tarnawski
|
Ola Svensson and Jakub Tarnawski and L\'aszl\'o A. V\'egh
|
Constant Factor Approximation for ATSP with Two Edge Weights
| null |
Proc. of Integer Programming and Combinatorial Optimization: 18th
International Conference, IPCO 2016, pages 226-237
|
10.1007/978-3-319-33461-5_19
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a constant factor approximation algorithm for the Asymmetric
Traveling Salesman Problem on shortest path metrics of directed graphs with two
different edge weights. For the case of unit edge weights, the first constant
factor approximation was given recently by Svensson. This was accomplished by
introducing an easier problem called Local-Connectivity ATSP and showing that a
good solution to this problem can be used to obtain a constant factor
approximation for ATSP. In this paper, we solve Local-Connectivity ATSP for two
different edge weights. The solution is based on a flow decomposition theorem
for solutions of the Held-Karp relaxation, which may be of independent
interest.
|
[
{
"created": "Sun, 22 Nov 2015 17:42:34 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Sep 2017 13:58:47 GMT",
"version": "v2"
}
] |
2017-09-05
|
[
[
"Svensson",
"Ola",
""
],
[
"Tarnawski",
"Jakub",
""
],
[
"Végh",
"László A.",
""
]
] |
We give a constant factor approximation algorithm for the Asymmetric Traveling Salesman Problem on shortest path metrics of directed graphs with two different edge weights. For the case of unit edge weights, the first constant factor approximation was given recently by Svensson. This was accomplished by introducing an easier problem called Local-Connectivity ATSP and showing that a good solution to this problem can be used to obtain a constant factor approximation for ATSP. In this paper, we solve Local-Connectivity ATSP for two different edge weights. The solution is based on a flow decomposition theorem for solutions of the Held-Karp relaxation, which may be of independent interest.
|
2304.13174
|
Jiechao Gao
|
Xiao-Yang Liu, Ziyi Xia, Hongyang Yang, Jiechao Gao, Daochen Zha, Ming
Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo
|
Dynamic Datasets and Market Environments for Financial Reinforcement
Learning
|
49 pages, 15 figures. arXiv admin note: substantial text overlap with
arXiv:2211.03107
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The financial market is a particularly challenging playground for deep
reinforcement learning due to its unique feature of dynamic datasets. Building
high-quality market environments for training financial reinforcement learning
(FinRL) agents is difficult due to major factors such as the low
signal-to-noise ratio of financial data, survivorship bias of historical data,
and model overfitting. In this paper, we present FinRL-Meta, a data-centric and
openly accessible library that processes dynamic datasets from real-world
markets into gym-style market environments and has been actively maintained by
the AI4Finance community. First, following a DataOps paradigm, we provide
hundreds of market environments through an automatic data curation pipeline.
Second, we provide homegrown examples and reproduce popular research papers as
stepping stones for users to design new trading strategies. We also deploy the
library on cloud platforms so that users can visualize their own results and
assess the relative performance via community-wise competitions. Third, we
provide dozens of Jupyter/Python demos organized into a curriculum and a
documentation website to serve the rapidly growing community. The open-source
codes for the data curation pipeline are available at
https://github.com/AI4Finance-Foundation/FinRL-Meta
|
[
{
"created": "Tue, 25 Apr 2023 22:17:31 GMT",
"version": "v1"
}
] |
2023-04-27
|
[
[
"Liu",
"Xiao-Yang",
""
],
[
"Xia",
"Ziyi",
""
],
[
"Yang",
"Hongyang",
""
],
[
"Gao",
"Jiechao",
""
],
[
"Zha",
"Daochen",
""
],
[
"Zhu",
"Ming",
""
],
[
"Wang",
"Christina Dan",
""
],
[
"Wang",
"Zhaoran",
""
],
[
"Guo",
"Jian",
""
]
] |
The financial market is a particularly challenging playground for deep reinforcement learning due to its unique feature of dynamic datasets. Building high-quality market environments for training financial reinforcement learning (FinRL) agents is difficult due to major factors such as the low signal-to-noise ratio of financial data, survivorship bias of historical data, and model overfitting. In this paper, we present FinRL-Meta, a data-centric and openly accessible library that processes dynamic datasets from real-world markets into gym-style market environments and has been actively maintained by the AI4Finance community. First, following a DataOps paradigm, we provide hundreds of market environments through an automatic data curation pipeline. Second, we provide homegrown examples and reproduce popular research papers as stepping stones for users to design new trading strategies. We also deploy the library on cloud platforms so that users can visualize their own results and assess the relative performance via community-wise competitions. Third, we provide dozens of Jupyter/Python demos organized into a curriculum and a documentation website to serve the rapidly growing community. The open-source codes for the data curation pipeline are available at https://github.com/AI4Finance-Foundation/FinRL-Meta
|
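A gym-style market environment of the kind the abstract describes can be sketched in a few lines. The class and method names below are hypothetical illustrations of the reset/step interface, not FinRL-Meta's actual API; the data and action space are toy choices.

```python
class MarketEnv:
    """Minimal gym-style trading environment sketch (hypothetical names).
    State: (cash, shares held, current price). Action: +1 buy one share,
    -1 sell one share, 0 hold. Reward: change in portfolio value."""

    def __init__(self, prices, cash=1000.0):
        self.prices = prices
        self.start_cash = cash
        self.reset()

    def reset(self):
        self.t = 0
        self.cash = self.start_cash
        self.shares = 0
        return self._obs()

    def _obs(self):
        return (self.cash, self.shares, self.prices[self.t])

    def _value(self):
        return self.cash + self.shares * self.prices[self.t]

    def step(self, action):
        before = self._value()
        price = self.prices[self.t]
        if action == 1 and self.cash >= price:
            self.cash -= price
            self.shares += 1
        elif action == -1 and self.shares > 0:
            self.cash += price
            self.shares -= 1
        self.t += 1
        done = self.t == len(self.prices) - 1
        reward = self._value() - before  # mark-to-market P&L for this step
        return self._obs(), reward, done, {}

env = MarketEnv([10.0, 12.0, 11.0, 15.0])
env.reset()
obs, reward, done, info = env.step(1)  # buy at 10, price moves to 12
print(reward)
```

The data-curation point of the paper is that `prices` here would be refreshed continuously from real markets rather than fixed, which is what makes the datasets "dynamic".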
2106.09895
|
Hengyi Zheng
|
Hengyi Zheng, Rui Wen, Xi Chen, Yifan Yang, Yunyan Zhang, Ziheng
Zhang, Ningyu Zhang, Bin Qin, Ming Xu, Yefeng Zheng
|
PRGC: Potential Relation and Global Correspondence Based Joint
Relational Triple Extraction
|
Accepted by ACL 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Joint extraction of entities and relations from unstructured texts is a
crucial task in information extraction. Recent methods achieve considerable
performance but still suffer from some inherent limitations, such as redundancy
of relation prediction, poor generalization of span-based extraction and
inefficiency. In this paper, we decompose this task into three subtasks,
Relation Judgement, Entity Extraction and Subject-object Alignment from a novel
perspective and then propose a joint relational triple extraction framework
based on Potential Relation and Global Correspondence (PRGC). Specifically, we
design a component to predict potential relations, which constrains the
following entity extraction to the predicted relation subset rather than all
relations; then a relation-specific sequence tagging component is applied to
handle the overlapping problem between subjects and objects; finally, a global
correspondence component is designed to align the subject and object into a
triple with low complexity. Extensive experiments show that PRGC achieves
state-of-the-art performance on public benchmarks with higher efficiency and
delivers consistent performance gain on complex scenarios of overlapping
triples.
|
[
{
"created": "Fri, 18 Jun 2021 03:38:07 GMT",
"version": "v1"
}
] |
2021-06-21
|
[
[
"Zheng",
"Hengyi",
""
],
[
"Wen",
"Rui",
""
],
[
"Chen",
"Xi",
""
],
[
"Yang",
"Yifan",
""
],
[
"Zhang",
"Yunyan",
""
],
[
"Zhang",
"Ziheng",
""
],
[
"Zhang",
"Ningyu",
""
],
[
"Qin",
"Bin",
""
],
[
"Xu",
"Ming",
""
],
[
"Zheng",
"Yefeng",
""
]
] |
Joint extraction of entities and relations from unstructured texts is a crucial task in information extraction. Recent methods achieve considerable performance but still suffer from some inherent limitations, such as redundancy of relation prediction, poor generalization of span-based extraction and inefficiency. In this paper, we decompose this task into three subtasks, Relation Judgement, Entity Extraction and Subject-object Alignment from a novel perspective and then propose a joint relational triple extraction framework based on Potential Relation and Global Correspondence (PRGC). Specifically, we design a component to predict potential relations, which constrains the following entity extraction to the predicted relation subset rather than all relations; then a relation-specific sequence tagging component is applied to handle the overlapping problem between subjects and objects; finally, a global correspondence component is designed to align the subject and object into a triple with low complexity. Extensive experiments show that PRGC achieves state-of-the-art performance on public benchmarks with higher efficiency and delivers consistent performance gain on complex scenarios of overlapping triples.
|
2202.12076
|
Wei Zhai
|
Liangsheng Lu, Wei Zhai, Hongchen Luo, Yu Kang and Yang Cao
|
Phrase-Based Affordance Detection via Cyclic Bilateral Interaction
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Affordance detection, which refers to perceiving objects with potential
action possibilities in images, is a challenging task since the possible
affordance depends on the person's purpose in real-world application scenarios.
The existing works mainly extract the inherent human-object dependencies from
image/video to accommodate affordance properties that change dynamically. In
this paper, we explore perceiving affordance from a vision-language
perspective and consider the challenging phrase-based affordance detection
problem, i.e., given a set of phrases describing the action purposes, all the
object regions in a scene with the same affordance should be detected. To this
end, we propose a cyclic bilateral consistency enhancement network (CBCE-Net)
to align language and vision features progressively. Specifically, the
presented CBCE-Net consists of a mutual guided vision-language module that
updates the common features of vision and language in a progressive manner, and
a cyclic interaction module (CIM) that facilitates the perception of possible
interaction with objects in a cyclic manner. In addition, we extend the public
Purpose-driven Affordance Dataset (PAD) by annotating affordance categories
with short phrases. The contrastive experimental results demonstrate the
superiority of our method over nine typical methods from four relevant fields
in terms of both objective metrics and visual quality. The related code and
dataset will be released at \url{https://github.com/lulsheng/CBCE-Net}.
|
[
{
"created": "Thu, 24 Feb 2022 13:02:27 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Feb 2022 03:25:33 GMT",
"version": "v2"
}
] |
2022-02-28
|
[
[
"Lu",
"Liangsheng",
""
],
[
"Zhai",
"Wei",
""
],
[
"Luo",
"Hongchen",
""
],
[
"Kang",
"Yu",
""
],
[
"Cao",
"Yang",
""
]
] |
Affordance detection, which refers to perceiving objects with potential action possibilities in images, is a challenging task since the possible affordance depends on the person's purpose in real-world application scenarios. The existing works mainly extract the inherent human-object dependencies from image/video to accommodate affordance properties that change dynamically. In this paper, we explore perceiving affordance from a vision-language perspective and consider the challenging phrase-based affordance detection problem, i.e., given a set of phrases describing the action purposes, all the object regions in a scene with the same affordance should be detected. To this end, we propose a cyclic bilateral consistency enhancement network (CBCE-Net) to align language and vision features progressively. Specifically, the presented CBCE-Net consists of a mutual guided vision-language module that updates the common features of vision and language in a progressive manner, and a cyclic interaction module (CIM) that facilitates the perception of possible interaction with objects in a cyclic manner. In addition, we extend the public Purpose-driven Affordance Dataset (PAD) by annotating affordance categories with short phrases. The contrastive experimental results demonstrate the superiority of our method over nine typical methods from four relevant fields in terms of both objective metrics and visual quality. The related code and dataset will be released at \url{https://github.com/lulsheng/CBCE-Net}.
|
2309.15234
|
Weizheng Wang
|
Weizheng Wang, Le Mao, Ruiqi Wang, and Byung-Cheol Min
|
Multi-Robot Cooperative Socially-Aware Navigation Using Multi-Agent
Reinforcement Learning
|
To appear in ICRA2024
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In public spaces shared with humans, ensuring multi-robot systems navigate
without collisions while respecting social norms is challenging, particularly
with limited communication. Although current robot social navigation techniques
leverage advances in reinforcement learning and deep learning, they frequently
overlook robot dynamics in simulations, leading to a simulation-to-reality gap.
In this paper, we bridge this gap by presenting a new multi-robot social
navigation environment crafted using Dec-POSMDP and multi-agent reinforcement
learning. Furthermore, we introduce SAMARL: a novel benchmark for cooperative
multi-robot social navigation. SAMARL employs a unique spatial-temporal
transformer combined with multi-agent reinforcement learning. This approach
effectively captures the complex interactions between robots and humans, thus
promoting cooperative tendencies in multi-robot systems. Our extensive
experiments reveal that SAMARL outperforms existing baseline and ablation
models in our designed environment. Demo videos for this work can be found at:
https://sites.google.com/view/samarl
|
[
{
"created": "Tue, 26 Sep 2023 19:56:21 GMT",
"version": "v1"
},
{
"created": "Wed, 15 May 2024 19:57:59 GMT",
"version": "v2"
}
] |
2024-05-17
|
[
[
"Wang",
"Weizheng",
""
],
[
"Mao",
"Le",
""
],
[
"Wang",
"Ruiqi",
""
],
[
"Min",
"Byung-Cheol",
""
]
] |
In public spaces shared with humans, ensuring multi-robot systems navigate without collisions while respecting social norms is challenging, particularly with limited communication. Although current robot social navigation techniques leverage advances in reinforcement learning and deep learning, they frequently overlook robot dynamics in simulations, leading to a simulation-to-reality gap. In this paper, we bridge this gap by presenting a new multi-robot social navigation environment crafted using Dec-POSMDP and multi-agent reinforcement learning. Furthermore, we introduce SAMARL: a novel benchmark for cooperative multi-robot social navigation. SAMARL employs a unique spatial-temporal transformer combined with multi-agent reinforcement learning. This approach effectively captures the complex interactions between robots and humans, thus promoting cooperative tendencies in multi-robot systems. Our extensive experiments reveal that SAMARL outperforms existing baseline and ablation models in our designed environment. Demo videos for this work can be found at: https://sites.google.com/view/samarl
|
2210.00856
|
Georges-Axel Jaloyan
|
Hadrien Barral, Georges-Axel Jaloyan, Fabien Thomas-Brans, Matthieu
Regnery, R\'emi G\'eraud-Stewart, Thibaut Heckmann, Thomas Souvignet, David
Naccache
|
A forensic analysis of the Google Home: repairing compressed data
without error correction
|
28 pages, modified version of paper that appeared originally at
Forensic Science International: Digital Investigation
|
Forensic Science International: Digital Investigation, Volume 42,
2022, 301437, ISSN 2666-2817
|
10.1016/j.fsidi.2022.301437
| null |
cs.CR cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper provides a detailed explanation of the steps taken to extract and
repair a Google Home's internal data. Starting with reverse engineering the
hardware of a commercial off-the-shelf Google Home, internal data is then
extracted by desoldering and dumping the flash memory. As error correction is
performed by the CPU using an undisclosed method, a new alternative method is
shown to repair a corrupted SquashFS filesystem, under the assumption of a
single or double bitflip per gzip-compressed fragment. Finally, a new method to
handle multiple possible repairs using three-valued logic is presented.
|
[
{
"created": "Fri, 30 Sep 2022 03:17:38 GMT",
"version": "v1"
}
] |
2022-10-04
|
[
[
"Barral",
"Hadrien",
""
],
[
"Jaloyan",
"Georges-Axel",
""
],
[
"Thomas-Brans",
"Fabien",
""
],
[
"Regnery",
"Matthieu",
""
],
[
"Géraud-Stewart",
"Rémi",
""
],
[
"Heckmann",
"Thibaut",
""
],
[
"Souvignet",
"Thomas",
""
],
[
"Naccache",
"David",
""
]
] |
This paper provides a detailed explanation of the steps taken to extract and repair a Google Home's internal data. Starting with reverse engineering the hardware of a commercial off-the-shelf Google Home, internal data is then extracted by desoldering and dumping the flash memory. As error correction is performed by the CPU using an undisclosed method, a new alternative method is shown to repair a corrupted SquashFS filesystem, under the assumption of a single or double bitflip per gzip-compressed fragment. Finally, a new method to handle multiple possible repairs using three-valued logic is presented.
|
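The single-bitflip repair idea from the abstract above can be sketched by brute force: flip each bit of the compressed fragment in turn and accept the first candidate that decompresses with a valid checksum. This toy uses zlib in place of the paper's gzip/SquashFS setting and covers only the single-flip case; the double-flip case is the same search over pairs of bits.

```python
import zlib

def repair_single_bitflip(corrupted):
    """Brute-force repair under a single-bitflip assumption: flip each
    bit in turn and return the first decompression that succeeds
    (zlib's checksum rejects wrong candidates). Returns None on failure."""
    try:
        return zlib.decompress(corrupted)  # maybe nothing is broken
    except zlib.error:
        pass
    data = bytearray(corrupted)
    for i in range(len(data)):
        for bit in range(8):
            data[i] ^= 1 << bit
            try:
                return zlib.decompress(bytes(data))
            except zlib.error:
                data[i] ^= 1 << bit  # undo the flip and keep searching
    return None

payload = b"the quick brown fox jumps over the lazy dog"
fragment = bytearray(zlib.compress(payload))
fragment[len(fragment) // 2] ^= 0x08  # inject a single bitflip
print(repair_single_bitflip(bytes(fragment)) == payload)
```

The paper's setting is harder because the CPU-side error correction is undisclosed and several candidate repairs may decode, which is what motivates its three-valued-logic handling of multiple possible repairs.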
1208.5997
|
Heba Ezzat
|
Heba Ezzat Ibrahim, Sherif M. Badr and Mohamed A. Shaheen
|
Phases vs. Levels using Decision Trees for Intrusion Detection Systems
|
7 pages; (IJCSIS) International Journal of Computer Science and
Information Security, Vol. 10, No. 8, 2012
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security of computers and the networks that connect them is increasingly
becoming of great significance. An intrusion detection system is one of the
security defense tools for computer networks. This paper compares two different
model approaches for representing an intrusion detection system using decision
tree techniques. These approaches are the Phase-model approach and the
Level-model approach. Each model is implemented using two techniques, New
Attacks and Data Partitioning. The experimental results showed that the Phase
approach has a higher classification rate in both the New Attacks and Data
Partitioning techniques than the Level approach.
|
[
{
"created": "Wed, 29 Aug 2012 19:24:24 GMT",
"version": "v1"
}
] |
2012-08-30
|
[
[
"Ibrahim",
"Heba Ezzat",
""
],
[
"Badr",
"Sherif M.",
""
],
[
"Shaheen",
"Mohamed A.",
""
]
] |
Security of computers and the networks that connect them is increasingly becoming of great significance. An intrusion detection system is one of the security defense tools for computer networks. This paper compares two different model approaches for representing an intrusion detection system using decision tree techniques. These approaches are the Phase-model approach and the Level-model approach. Each model is implemented using two techniques, New Attacks and Data Partitioning. The experimental results showed that the Phase approach has a higher classification rate in both the New Attacks and Data Partitioning techniques than the Level approach.
|
1511.02058
|
Hung-Hsuan Chen
|
Hung-Hsuan Chen, Alexander G. Ororbia II, C. Lee Giles
|
ExpertSeer: a Keyphrase Based Expert Recommender for Digital Libraries
| null | null | null | null |
cs.DL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
We describe ExpertSeer, a generic framework for expert recommendation based
on the contents of a digital library. Given a query term q, ExpertSeer
recommends experts of q by retrieving authors who published relevant papers
determined by related keyphrases and the quality of papers. The system is based
on a simple yet effective keyphrase extractor and the Bayes' rule for expert
recommendation. ExpertSeer is domain independent and can be applied to
different disciplines and applications since the system is automated and not
tailored to a specific discipline. Digital library providers can employ the
system to enrich their services and organizations can discover experts of
interest within an organization. To demonstrate the power of ExpertSeer, we
apply the framework to build two expert recommender systems. The first, CSSeer,
utilizes the CiteSeerX digital library to recommend experts primarily in
computer science. The second, ChemSeer, uses publicly available documents from
the Royal Society of Chemistry (RSC) to recommend experts in chemistry. Using
one thousand computer science terms as benchmark queries, we compared the top-n
experts (n=3, 5, 10) returned by CSSeer to two other expert recommenders --
Microsoft Academic Search and ArnetMiner -- and a simulator that imitates the
ranking function of Google Scholar. Although CSSeer, Microsoft Academic Search,
and ArnetMiner mostly return prestigious researchers who published several
papers related to the query term, it was found that different expert
recommenders return moderately different recommendations. To further study
their performance, we obtained a widely used benchmark dataset as the ground
truth for comparison. The results show that our system outperforms Microsoft
Academic Search and ArnetMiner in terms of Precision-at-k (P@k) for k=3, 5, 10.
We also conducted several case studies to validate the usefulness of our
system.
|
[
{
"created": "Fri, 6 Nov 2015 12:55:17 GMT",
"version": "v1"
}
] |
2015-11-09
|
[
[
"Chen",
"Hung-Hsuan",
""
],
[
"Ororbia",
"Alexander G.",
"II"
],
[
"Giles",
"C. Lee",
""
]
] |
We describe ExpertSeer, a generic framework for expert recommendation based on the contents of a digital library. Given a query term q, ExpertSeer recommends experts of q by retrieving authors who published relevant papers determined by related keyphrases and the quality of papers. The system is based on a simple yet effective keyphrase extractor and the Bayes' rule for expert recommendation. ExpertSeer is domain independent and can be applied to different disciplines and applications since the system is automated and not tailored to a specific discipline. Digital library providers can employ the system to enrich their services and organizations can discover experts of interest within an organization. To demonstrate the power of ExpertSeer, we apply the framework to build two expert recommender systems. The first, CSSeer, utilizes the CiteSeerX digital library to recommend experts primarily in computer science. The second, ChemSeer, uses publicly available documents from the Royal Society of Chemistry (RSC) to recommend experts in chemistry. Using one thousand computer science terms as benchmark queries, we compared the top-n experts (n=3, 5, 10) returned by CSSeer to two other expert recommenders -- Microsoft Academic Search and ArnetMiner -- and a simulator that imitates the ranking function of Google Scholar. Although CSSeer, Microsoft Academic Search, and ArnetMiner mostly return prestigious researchers who published several papers related to the query term, it was found that different expert recommenders return moderately different recommendations. To further study their performance, we obtained a widely used benchmark dataset as the ground truth for comparison. The results show that our system outperforms Microsoft Academic Search and ArnetMiner in terms of Precision-at-k (P@k) for k=3, 5, 10. We also conducted several case studies to validate the usefulness of our system.
|
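The Bayes'-rule recommendation described above can be sketched on toy data: score each author by P(q | author) * P(author), with both terms estimated from extracted keyphrase counts. The estimators, names, and corpus here are illustrative assumptions, not ExpertSeer's exact model.

```python
def rank_experts(author_keyphrases, query, top_n=3):
    """Rank authors for a query keyphrase q via Bayes' rule:
    P(author | q) ∝ P(q | author) * P(author), where P(q | author) is the
    fraction of the author's keyphrases matching q, and P(author) is the
    author's share of all keyphrases (a crude proxy for output volume)."""
    total = sum(len(kps) for kps in author_keyphrases.values())
    scores = {}
    for author, kps in author_keyphrases.items():
        p_q_given_a = kps.count(query) / len(kps) if kps else 0.0
        p_a = len(kps) / total
        scores[author] = p_q_given_a * p_a
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy corpus: each author's keyphrases extracted from their papers.
corpus = {
    "alice": ["neural networks", "deep learning", "neural networks"],
    "bob":   ["information retrieval", "neural networks"],
    "carol": ["crystallography"],
}
print(rank_experts(corpus, "neural networks", top_n=2))
```

Precision-at-k, the evaluation metric the abstract reports, is then just the fraction of the top-k returned authors that appear in the ground-truth expert list for the query.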
1804.02795
|
Gangshan Jing
|
Gangshan Jing, Guofeng Zhang, Heung Wing Joseph Lee, and Long Wang
|
Weak Rigidity Theory and its Application to Multi-agent Formation
Stabilization
|
This paper has been accepted by SIAM Journal on Control and
Optimization
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the notion of weak rigidity to characterize a framework
by pairwise inner products of inter-agent displacements. Compared to
distance-based rigidity, weak rigidity requires fewer constrained edges in the
graph to determine a geometric shape in a space of arbitrary dimension. A
necessary and sufficient graphical condition for infinitesimal weak rigidity of
planar frameworks is derived. As an application of the proposed weak rigidity
theory, a gradient based control law and a non-gradient based control law are
designed for a group of single-integrator modeled agents to stabilize a desired
formation shape, respectively. Using the gradient control law, we prove that an
infinitesimally weakly rigid formation is locally exponentially stable. In
particular, if the number of agents is one greater than the dimension of the
space, a minimally infinitesimally weakly rigid formation is almost globally
asymptotically stable. In the literature of rigid formation, the sensing graph
is always required to be rigid. Using the non-gradient control law based on
weak rigidity theory, the sensing graph need not be rigid for local
exponential stability of the formation. A numerical simulation is performed to
illustrate the effectiveness of our main results.
|
[
{
"created": "Mon, 9 Apr 2018 02:23:31 GMT",
"version": "v1"
}
] |
2018-04-10
|
[
[
"Jing",
"Gangshan",
""
],
[
"Zhang",
"Guofeng",
""
],
[
"Lee",
"Heung Wing Joseph",
""
],
[
"Wang",
"Long",
""
]
] |
This paper introduces the notion of weak rigidity to characterize a framework by pairwise inner products of inter-agent displacements. Compared to distance-based rigidity, weak rigidity requires fewer constrained edges in the graph to determine a geometric shape in a space of arbitrary dimension. A necessary and sufficient graphical condition for infinitesimal weak rigidity of planar frameworks is derived. As an application of the proposed weak rigidity theory, a gradient-based control law and a non-gradient-based control law are designed for a group of single-integrator modeled agents to stabilize a desired formation shape. Using the gradient control law, we prove that an infinitesimally weakly rigid formation is locally exponentially stable. In particular, if the number of agents is one greater than the dimension of the space, a minimally infinitesimally weakly rigid formation is almost globally asymptotically stable. In the rigid formation literature, the sensing graph is always required to be rigid. Using the non-gradient control law based on weak rigidity theory, the sensing graph need not be rigid for local exponential stability of the formation. A numerical simulation is performed to illustrate the effectiveness of our main results.
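The quantity underlying weak rigidity, pairwise inner products of inter-agent displacements, can be sketched directly; the positions, edges, and edge pairing below are illustrative assumptions rather than the paper's exact construction:

```python
# Weak rigidity constrains inner products <p_j - p_i, p_l - p_k> over pairs
# of edges, rather than only squared edge lengths as in distance rigidity.

def displacement(p, i, j):
    """Displacement vector from agent i to agent j."""
    return [b - a for a, b in zip(p[i], p[j])]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def weak_rigidity_function(p, pairs):
    """For each edge pair ((i, j), (k, l)), the inner product of displacements."""
    return [inner(displacement(p, i, j), displacement(p, k, l))
            for (i, j), (k, l) in pairs]

# Three agents in the plane. Pairing an edge with itself recovers the squared
# edge length of distance rigidity; the mixed pair additionally encodes angle.
p = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
pairs = [((0, 1), (0, 1)), ((0, 2), (0, 2)), ((0, 1), (0, 2))]
print(weak_rigidity_function(p, pairs))  # [1.0, 1.0, 0.0]
```

Because mixed pairs carry angle information, fewer constrained edges can pin down a shape than with distances alone.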
|
2307.06318
|
Madalina Erascu
|
Vlad-Ioan Luca and Madalina Erascu
|
SAGE -- A Tool for Optimal Deployments in Kubernetes Clusters
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Cloud computing has brought a fundamental transformation in how organizations
operate their applications, enabling them to achieve affordable high
availability of services. Kubernetes has emerged as the preferred choice for
container orchestration and service management across many Cloud computing
platforms. The scheduler in Kubernetes plays a crucial role in determining the
placement of newly deployed service containers. However, the default scheduler,
while fast, often lacks optimization, leading to inefficient service placement
or even deployment failures.
This paper introduces SAGE, a tool for computing optimal deployments in
Kubernetes clusters that can also assist the Kubernetes default scheduler and any other
custom scheduler in application deployment. SAGE computes an optimal deployment
plan based on the constraints of the application to be deployed and the
available Cloud resources. We show the potential benefits of using SAGE by
considering test cases with various characteristics. It turns out that SAGE
surpasses other schedulers by comprehensively analyzing the application demand
and cluster image. This ability allows it to better understand the needs of the
pods, resulting in consistently optimal solutions across all scenarios. The
accompanying material of this paper is publicly available at
https://github.com/SAGE-Project/SAGE-Predeployer.
|
[
{
"created": "Wed, 12 Jul 2023 17:29:28 GMT",
"version": "v1"
}
] |
2023-07-13
|
[
[
"Luca",
"Vlad-Ioan",
""
],
[
"Erascu",
"Madalina",
""
]
] |
Cloud computing has brought a fundamental transformation in how organizations operate their applications, enabling them to achieve affordable high availability of services. Kubernetes has emerged as the preferred choice for container orchestration and service management across many Cloud computing platforms. The scheduler in Kubernetes plays a crucial role in determining the placement of newly deployed service containers. However, the default scheduler, while fast, often lacks optimization, leading to inefficient service placement or even deployment failures. This paper introduces SAGE, a tool for computing optimal deployments in Kubernetes clusters that can also assist the Kubernetes default scheduler and any other custom scheduler in application deployment. SAGE computes an optimal deployment plan based on the constraints of the application to be deployed and the available Cloud resources. We show the potential benefits of using SAGE by considering test cases with various characteristics. It turns out that SAGE surpasses other schedulers by comprehensively analyzing the application demand and cluster image. This ability allows it to better understand the needs of the pods, resulting in consistently optimal solutions across all scenarios. The accompanying material of this paper is publicly available at https://github.com/SAGE-Project/SAGE-Predeployer.
|
1207.1161
|
Manish Gupta
|
Abhishek Chhajer and Manish K. Gupta and Sandeep Vasani and Jaley
Dholakiya
|
Modular Arithmetic Expressions and Primality Testing via DNA
Self-Assembly
| null | null | null | null |
cs.ET cs.DS cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-assembly is a fundamental process by which supramolecular species form
spontaneously from their components. This process is ubiquitous throughout the
chemistry of life and is central to biological information processing.
Algorithms for solving many mathematical and computational problems via tile
self-assembly have been proposed by many researchers in the last decade. In
particular, tile sets for performing basic arithmetic on two inputs have been
given. In this work we give tile sets for performing basic arithmetic
(addition, subtraction, multiplication) on n inputs and subsequently reducing
the result modulo a given integer. We also present a tile set for primality
testing. Finally, we present a software tool, 'xtilemod', for doing modular
arithmetic. It simplifies the task of creating the input files to the xgrow
simulator for doing basic (addition, subtraction, multiplication and division)
as well as modular arithmetic on n inputs. Similar software for creating tile
sets for primality testing is also given.
|
[
{
"created": "Thu, 5 Jul 2012 04:55:03 GMT",
"version": "v1"
}
] |
2012-07-06
|
[
[
"Chhajer",
"Abhishek",
""
],
[
"Gupta",
"Manish K.",
""
],
[
"Vasani",
"Sandeep",
""
],
[
"Dholakiya",
"Jaley",
""
]
] |
Self-assembly is a fundamental process by which supramolecular species form spontaneously from their components. This process is ubiquitous throughout the chemistry of life and is central to biological information processing. Algorithms for solving many mathematical and computational problems via tile self-assembly have been proposed by many researchers in the last decade. In particular, tile sets for performing basic arithmetic on two inputs have been given. In this work we give tile sets for performing basic arithmetic (addition, subtraction, multiplication) on n inputs and subsequently reducing the result modulo a given integer. We also present a tile set for primality testing. Finally, we present a software tool, 'xtilemod', for doing modular arithmetic. It simplifies the task of creating the input files to the xgrow simulator for doing basic (addition, subtraction, multiplication and division) as well as modular arithmetic on n inputs. Similar software for creating tile sets for primality testing is also given.
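The input/output behaviour realized by the tile sets, n-input arithmetic followed by modular reduction, can be stated conventionally; this sketches only the computed function, not the self-assembly encoding:

```python
# n-input arithmetic modulo m: fold the inputs with the chosen operation,
# then reduce the result modulo m.
from functools import reduce

def modular(op, inputs, m):
    """Combine n inputs with `op`, then reduce modulo m."""
    return reduce(op, inputs) % m

print(modular(lambda a, b: a + b, [7, 11, 13], 5))  # (7+11+13) % 5 = 1
print(modular(lambda a, b: a * b, [7, 11, 13], 5))  # (7*11*13) % 5 = 1
```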
|
2307.01782
|
Sudeep Pasricha
|
Salma Afifi, Febin Sunny, Amin Shafiee, Mahdi Nikdast, Sudeep Pasricha
|
GHOST: A Graph Neural Network Accelerator using Silicon Photonics
| null | null | null | null |
cs.AR cs.AI cs.ET cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Graph neural networks (GNNs) have emerged as a powerful approach for
modelling and learning from graph-structured data. Multiple fields have since
benefitted enormously from the capabilities of GNNs, such as recommendation
systems, social network analysis, drug discovery, and robotics. However,
accelerating and efficiently processing GNNs require a unique approach that
goes beyond conventional artificial neural network accelerators, due to the
substantial computational and memory requirements of GNNs. The slowdown of
scaling in CMOS platforms also motivates a search for alternative
implementation substrates. In this paper, we present GHOST, the first
silicon-photonic hardware accelerator for GNNs. GHOST efficiently alleviates
the costs associated with both vertex-centric and edge-centric operations. It
implements separately the three main stages involved in running GNNs in the
optical domain, allowing it to be used for the inference of various widely used
GNN models and architectures, such as graph convolution networks and graph
attention networks. Our simulation studies indicate that GHOST exhibits at
least 10.2x better throughput and 3.8x better energy efficiency when compared
to GPU, TPU, CPU and multiple state-of-the-art GNN hardware accelerators.
|
[
{
"created": "Tue, 4 Jul 2023 15:37:20 GMT",
"version": "v1"
}
] |
2023-07-06
|
[
[
"Afifi",
"Salma",
""
],
[
"Sunny",
"Febin",
""
],
[
"Shafiee",
"Amin",
""
],
[
"Nikdast",
"Mahdi",
""
],
[
"Pasricha",
"Sudeep",
""
]
] |
Graph neural networks (GNNs) have emerged as a powerful approach for modelling and learning from graph-structured data. Multiple fields have since benefitted enormously from the capabilities of GNNs, such as recommendation systems, social network analysis, drug discovery, and robotics. However, accelerating and efficiently processing GNNs require a unique approach that goes beyond conventional artificial neural network accelerators, due to the substantial computational and memory requirements of GNNs. The slowdown of scaling in CMOS platforms also motivates a search for alternative implementation substrates. In this paper, we present GHOST, the first silicon-photonic hardware accelerator for GNNs. GHOST efficiently alleviates the costs associated with both vertex-centric and edge-centric operations. It implements separately the three main stages involved in running GNNs in the optical domain, allowing it to be used for the inference of various widely used GNN models and architectures, such as graph convolution networks and graph attention networks. Our simulation studies indicate that GHOST exhibits at least 10.2x better throughput and 3.8x better energy efficiency when compared to GPU, TPU, CPU and multiple state-of-the-art GNN hardware accelerators.
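GNN inference separates into edge-centric neighbor aggregation and vertex-centric feature transformation, the two cost classes named above. A minimal mean-aggregation step in pure Python, unrelated to the photonic implementation:

```python
# One message-passing step: each node averages its neighbors' feature vectors.
def aggregate(features, adj):
    """Mean of each node's neighbor features (nodes without neighbors keep theirs)."""
    out = []
    for i, neigh in enumerate(adj):
        if not neigh:
            out.append(features[i][:])
            continue
        dim = len(features[i])
        out.append([sum(features[j][d] for j in neigh) / len(neigh)
                    for d in range(dim)])
    return out

# A path graph 0 - 1 - 2 with scalar features.
feats = [[1.0], [3.0], [5.0]]
adj = [[1], [0, 2], [1]]
print(aggregate(feats, adj))  # [[3.0], [3.0], [3.0]]
```

A real GNN would follow this edge-centric stage with a learned vertex-centric transform (e.g. a weight matrix and nonlinearity).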
|
1608.04563
|
Der-Yeuan Yu
|
Der-Yeuan Yu, Aanjhan Ranganathan, Ramya Jayaram Masti, Claudio
Soriente, Srdjan Capkun
|
SALVE: Server Authentication with Location VErification
|
14 pages. This paper will be presented at the 22nd ACM International
Conference on Mobile Computing and Networking (MobiCom 2016). Related paper:
https://eprint.iacr.org/2015/230
| null |
10.1145/2973750.2973766
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Location Service (LCS) proposed by the telecommunication industry is an
architecture that allows the location of mobile devices to be accessed in
various applications. We explore the use of LCS in location-enhanced server
authentication, which traditionally relies on certificates. Given recent
incidents involving certificate authorities, various techniques to strengthen
server authentication were proposed. They focus on improving the certificate
validation process, such as pinning, revocation, or multi-path probing. In this
paper, we propose using the server's geographic location as a second factor of
its authenticity. Our solution, SALVE, achieves location-based server
authentication by using secure DNS resolution and by leveraging LCS for
location measurements. We develop a TLS extension that enables the client to
verify the server's location in addition to its certificate. Successful server
authentication therefore requires a valid certificate and the server's presence
at a legitimate geographic location, e.g., on the premises of a data center.
SALVE prevents server impersonation by remote adversaries with mis-issued
certificates or stolen private keys of the legitimate server. We develop a
prototype implementation and our evaluation in real-world settings shows that
it incurs minimal impact to the average server throughput. Our solution is
backward compatible and can be integrated with existing approaches for
improving server authentication in TLS.
|
[
{
"created": "Tue, 16 Aug 2016 12:10:57 GMT",
"version": "v1"
}
] |
2016-08-17
|
[
[
"Yu",
"Der-Yeuan",
""
],
[
"Ranganathan",
"Aanjhan",
""
],
[
"Masti",
"Ramya Jayaram",
""
],
[
"Soriente",
"Claudio",
""
],
[
"Capkun",
"Srdjan",
""
]
] |
The Location Service (LCS) proposed by the telecommunication industry is an architecture that allows the location of mobile devices to be accessed in various applications. We explore the use of LCS in location-enhanced server authentication, which traditionally relies on certificates. Given recent incidents involving certificate authorities, various techniques to strengthen server authentication were proposed. They focus on improving the certificate validation process, such as pinning, revocation, or multi-path probing. In this paper, we propose using the server's geographic location as a second factor of its authenticity. Our solution, SALVE, achieves location-based server authentication by using secure DNS resolution and by leveraging LCS for location measurements. We develop a TLS extension that enables the client to verify the server's location in addition to its certificate. Successful server authentication therefore requires a valid certificate and the server's presence at a legitimate geographic location, e.g., on the premises of a data center. SALVE prevents server impersonation by remote adversaries with mis-issued certificates or stolen private keys of the legitimate server. We develop a prototype implementation and our evaluation in real-world settings shows that it incurs minimal impact to the average server throughput. Our solution is backward compatible and can be integrated with existing approaches for improving server authentication in TLS.
|
2001.03416
|
Saptarshi Kumar Lahiri Mr.
|
Saptarshi Kumar Lahiri, Kanishka Bhattacharya, Amit Shaw, L S
Ramachandra
|
A stable SPH with adaptive B-spline kernel
|
34 Pages, 22 Figures
|
Journal of Computational Physics 422(2020) 109761
|
10.1016/j.jcp.2020.109761
| null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tensile instability, often observed in smoothed particle hydrodynamics (SPH),
is a numerical artifact that manifests itself by unphysical clustering or
separation of particles. The instability originates in estimating the
derivatives of the smoothing functions which, when interacting with the
material constitution, may result in negative stiffness in the discretized
system. In the present study, a stable formulation of SPH is developed where
the kernel function is continuously adapted at every material point depending
on its state of stress. A B-spline basis function with a variable intermediate
knot is used as the kernel function. The shape of the kernel function is then
modified by changing the intermediate knot position such that the condition
associated with instability does not arise. While implementing the algorithm
the simplicity and computational efficiency of SPH are not compromised.
One-dimensional dispersion analysis is performed to understand the effect of
the adaptive kernel on stability. Finally, the efficacy of the algorithm is
demonstrated through some
benchmark elastic dynamics problems.
|
[
{
"created": "Sat, 4 Jan 2020 09:27:32 GMT",
"version": "v1"
}
] |
2020-08-26
|
[
[
"Lahiri",
"Saptarshi Kumar",
""
],
[
"Bhattacharya",
"Kanishka",
""
],
[
"Shaw",
"Amit",
""
],
[
"Ramachandra",
"L S",
""
]
] |
Tensile instability, often observed in smoothed particle hydrodynamics (SPH), is a numerical artifact that manifests itself by unphysical clustering or separation of particles. The instability originates in estimating the derivatives of the smoothing functions which, when interacting with the material constitution, may result in negative stiffness in the discretized system. In the present study, a stable formulation of SPH is developed where the kernel function is continuously adapted at every material point depending on its state of stress. A B-spline basis function with a variable intermediate knot is used as the kernel function. The shape of the kernel function is then modified by changing the intermediate knot position such that the condition associated with instability does not arise. While implementing the algorithm the simplicity and computational efficiency of SPH are not compromised. One-dimensional dispersion analysis is performed to understand the effect of the adaptive kernel on stability. Finally, the efficacy of the algorithm is demonstrated through some benchmark elastic dynamics problems.
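For reference, the standard fixed-knot cubic B-spline SPH kernel in 1D, i.e. the baseline that a stress-dependent knot adaptation would modify (the adaptive version itself is not reproduced here):

```python
# Standard 1D cubic B-spline SPH kernel with smoothing length h.
# Normalization sigma = 2/(3h); compact support of radius 2h.
def cubic_bspline(r, h=1.0):
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

print(cubic_bspline(0.0))  # 2/3, the kernel peak
print(cubic_bspline(2.0))  # 0.0, edge of the compact support
```

The tensile instability arises from the sign of this kernel's second derivative near the intermediate knot at q = 1, which is what moving that knot per particle is meant to control.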
|
2212.11584
|
Yuanzhe Zhang
|
Yuanzhe Zhang, Shirui Pan and Jiangshan Yu
|
TxAllo: Dynamic Transaction Allocation in Sharded Blockchain Systems
|
Accepted by IEEE ICDE 2023
| null | null | null |
cs.DB cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The scalability problem has been one of the most significant barriers
limiting the adoption of blockchains. Blockchain sharding is a promising
approach to this problem. However, the sharding mechanism introduces a
significant number of cross-shard transactions, which are expensive to process.
This paper focuses on the transaction allocation problem to reduce the number
of cross-shard transactions for better scalability. In particular, we
systematically formulate the transaction allocation problem and convert it to
the community detection problem on a graph. A deterministic and fast allocation
scheme TxAllo is proposed to dynamically infer the allocation of accounts and
their associated transactions. It directly optimizes the system throughput,
considering both the number of cross-shard transactions and the workload
balance among shards.
We evaluate the performance of TxAllo on an Ethereum dataset containing over
91 million transactions. Our evaluation results show that for a blockchain with
60 shards, TxAllo reduces the cross-shard transaction ratio from 98% (by using
traditional hash-based allocation) to about 12%. In the meantime, the workload
balance is well maintained. Compared with other methods, the execution time of
TxAllo is almost negligible. For example, when updating the allocation every
hour, the execution of TxAllo only takes 0.5 seconds on average, whereas other
concurrent works, such as BrokerChain (INFOCOM'22) leveraging the classic METIS
method, require 422 seconds.
|
[
{
"created": "Thu, 22 Dec 2022 10:22:31 GMT",
"version": "v1"
}
] |
2022-12-23
|
[
[
"Zhang",
"Yuanzhe",
""
],
[
"Pan",
"Shirui",
""
],
[
"Yu",
"Jiangshan",
""
]
] |
The scalability problem has been one of the most significant barriers limiting the adoption of blockchains. Blockchain sharding is a promising approach to this problem. However, the sharding mechanism introduces a significant number of cross-shard transactions, which are expensive to process. This paper focuses on the transaction allocation problem to reduce the number of cross-shard transactions for better scalability. In particular, we systematically formulate the transaction allocation problem and convert it to the community detection problem on a graph. A deterministic and fast allocation scheme TxAllo is proposed to dynamically infer the allocation of accounts and their associated transactions. It directly optimizes the system throughput, considering both the number of cross-shard transactions and the workload balance among shards. We evaluate the performance of TxAllo on an Ethereum dataset containing over 91 million transactions. Our evaluation results show that for a blockchain with 60 shards, TxAllo reduces the cross-shard transaction ratio from 98% (by using traditional hash-based allocation) to about 12%. In the meantime, the workload balance is well maintained. Compared with other methods, the execution time of TxAllo is almost negligible. For example, when updating the allocation every hour, the execution of TxAllo only takes 0.5 seconds on average, whereas other concurrent works, such as BrokerChain (INFOCOM'22) leveraging the classic METIS method, require 422 seconds.
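As a toy stand-in for the community-detection view of allocation, a majority label propagation over the account graph; the actual TxAllo scheme differs and additionally balances workload across shards:

```python
# Accounts that transact together should land in the same shard; a simple
# label-propagation pass approximates this community structure.
def cross_shard_ratio(txs, shard_of):
    return sum(shard_of[a] != shard_of[b] for a, b in txs) / len(txs)

def allocate(txs, accounts, rounds=5):
    shard_of = {a: i for i, a in enumerate(accounts)}  # start: own shard each
    neigh = {a: [] for a in accounts}
    for a, b in txs:
        neigh[a].append(b)
        neigh[b].append(a)
    for _ in range(rounds):
        for a in accounts:
            labels = [shard_of[n] for n in neigh[a]]
            if labels:  # adopt the most common neighbor label
                shard_of[a] = max(set(labels), key=labels.count)
    return shard_of

# Two transaction "communities": {a, b, c} and {d, e, f}.
txs = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e"), ("e", "f"), ("d", "f")]
alloc = allocate(txs, ["a", "b", "c", "d", "e", "f"])
print(cross_shard_ratio(txs, alloc))  # 0.0: no cross-shard transactions left
```

A naive hash-based allocation would scatter these accounts across shards, which is exactly the 98% cross-shard ratio the evaluation above starts from.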
|
2112.05136
|
Chuang Gan
|
Yining Hong, Li Yi, Joshua B. Tenenbaum, Antonio Torralba, Chuang Gan
|
PTR: A Benchmark for Part-based Conceptual, Relational, and Physical
Reasoning
|
NeurIPS 2021. Project page: http://ptr.csail.mit.edu/
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
A critical aspect of human visual perception is the ability to parse visual
scenes into individual objects and further into object parts, forming
part-whole hierarchies. Such composite structures could induce a rich set of
semantic concepts and relations, thus playing an important role in the
interpretation and organization of visual signals as well as for the
generalization of visual perception and reasoning. However, existing visual
reasoning benchmarks mostly focus on objects rather than parts. Visual
reasoning based on the full part-whole hierarchy is much more challenging than
object-centric reasoning due to finer-grained concepts, richer geometric
relations, and more complex physics. Therefore, to better support part-based
conceptual, relational and physical reasoning, we introduce a new large-scale
diagnostic visual reasoning dataset named PTR. PTR contains around 70k RGBD
synthetic images with ground truth object and part level annotations regarding
semantic instance segmentation, color attributes, spatial and geometric
relationships, and certain physical properties such as stability. These images
are paired with 700k machine-generated questions covering various reasoning
types, making them a good testbed for visual reasoning models. We
examine several state-of-the-art visual reasoning models on this dataset and
observe that they still make many surprising mistakes in situations where
humans can easily infer the correct answer. We believe this dataset will open
up new opportunities for part-based reasoning.
|
[
{
"created": "Thu, 9 Dec 2021 18:59:34 GMT",
"version": "v1"
}
] |
2021-12-10
|
[
[
"Hong",
"Yining",
""
],
[
"Yi",
"Li",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Gan",
"Chuang",
""
]
] |
A critical aspect of human visual perception is the ability to parse visual scenes into individual objects and further into object parts, forming part-whole hierarchies. Such composite structures could induce a rich set of semantic concepts and relations, thus playing an important role in the interpretation and organization of visual signals as well as for the generalization of visual perception and reasoning. However, existing visual reasoning benchmarks mostly focus on objects rather than parts. Visual reasoning based on the full part-whole hierarchy is much more challenging than object-centric reasoning due to finer-grained concepts, richer geometric relations, and more complex physics. Therefore, to better support part-based conceptual, relational and physical reasoning, we introduce a new large-scale diagnostic visual reasoning dataset named PTR. PTR contains around 70k RGBD synthetic images with ground truth object and part level annotations regarding semantic instance segmentation, color attributes, spatial and geometric relationships, and certain physical properties such as stability. These images are paired with 700k machine-generated questions covering various reasoning types, making them a good testbed for visual reasoning models. We examine several state-of-the-art visual reasoning models on this dataset and observe that they still make many surprising mistakes in situations where humans can easily infer the correct answer. We believe this dataset will open up new opportunities for part-based reasoning.
|
1604.01846
|
Mirza Kibria
|
Mirza Golam Kibria and Shan Lin
|
Resource Allocation Optimization for Users with Different Levels of
Service in Multicarrier Systems
|
Resource Allocation in MU-OFDM
|
IEEE Signal Process. Lett., vol. 22, no. 11, pp. 1869-1873, Nov.
2015
|
10.1109/LSP.2015.2440440
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We optimize the throughput of a single cell multiuser orthogonal frequency
division multiplexing system with proportional data rate fairness among the
users. The concept is to support mobile users with different levels of service.
The optimization problem is a mixed integer nonlinear programming problem,
which is computationally very expensive. We propose a novel and efficient
near-optimal solution adopting a two-phase optimization approach that separates
the subcarrier and power allocation. In the first phase, we relax the strict
proportional data rate requirements and employ an iterative subcarrier
allocation approach that coarsely satisfies desired data rate proportionality
constraints. In the second phase, we reallocate the power among the users in an
iterative way to further enhance the adherence to the desired proportions by
exploiting the normalized proportionality deviation measure. The simulation
results show that the proposed solution exhibits very strong adherence to the
desired proportional data rate fairness while achieving higher system
throughput compared to the other existing solutions.
|
[
{
"created": "Thu, 7 Apr 2016 01:50:07 GMT",
"version": "v1"
}
] |
2016-04-08
|
[
[
"Kibria",
"Mirza Golam",
""
],
[
"Lin",
"Shan",
""
]
] |
We optimize the throughput of a single cell multiuser orthogonal frequency division multiplexing system with proportional data rate fairness among the users. The concept is to support mobile users with different levels of service. The optimization problem is a mixed integer nonlinear programming problem, which is computationally very expensive. We propose a novel and efficient near-optimal solution adopting a two-phase optimization approach that separates the subcarrier and power allocation. In the first phase, we relax the strict proportional data rate requirements and employ an iterative subcarrier allocation approach that coarsely satisfies desired data rate proportionality constraints. In the second phase, we reallocate the power among the users in an iterative way to further enhance the adherence to the desired proportions by exploiting the normalized proportionality deviation measure. The simulation results show that the proposed solution exhibits very strong adherence to the desired proportional data rate fairness while achieving higher system throughput compared to the other existing solutions.
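The "normalized proportionality deviation" measure is not defined above; one natural reading, the gap between each user's achieved rate share and its target proportion share, can be sketched as follows (a hypothetical definition, not necessarily the paper's):

```python
# Deviation of achieved rate shares from target proportional shares.
# Zero everywhere means perfect proportional-rate fairness.
def proportionality_deviation(rates, targets):
    total_r, total_t = sum(rates), sum(targets)
    return [r / total_r - t / total_t for r, t in zip(rates, targets)]

print(proportionality_deviation([2.0, 1.0, 1.0], [2, 1, 1]))  # [0.0, 0.0, 0.0]
print(proportionality_deviation([3.0, 1.0], [1, 1]))          # [0.25, -0.25]
```

A power-reallocation phase would shift power away from users with positive deviation toward those with negative deviation.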
|
2306.07724
|
Hakan Temiz
|
Hakan Temiz
|
Effects of Data Enrichment with Image Transformations on the Performance
of Deep Networks
| null |
The European Journal of Research and Development, 2(2), 23-33
(2022)
|
10.56038/ejrnd.v2i2.23
| null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Images cannot always be expected to come in a certain standard format and
orientation. Deep networks need to be trained to take into account unexpected
variations in orientation or format. For this purpose, training data should be
enriched to include different conditions. In this study, the effects of data
enrichment on the performance of deep networks in the super resolution problem
were investigated experimentally. A total of six basic image transformations
were used for the enrichment procedures. In the experiments, two deep network
models were trained with variants of the ILSVRC2012 dataset enriched by these
six image transformation processes. Considering a single image transformation,
it has been observed that the data enriched with 180-degree rotation provides
the best results. The most unsuccessful result was obtained when the models
were trained on the enriched data generated by the upside-down flip.
Models scored highest when trained with a mix of all transformations.
|
[
{
"created": "Tue, 13 Jun 2023 12:22:54 GMT",
"version": "v1"
}
] |
2023-06-14
|
[
[
"Temiz",
"Hakan",
""
]
] |
Images cannot always be expected to come in a certain standard format and orientation. Deep networks need to be trained to take into account unexpected variations in orientation or format. For this purpose, training data should be enriched to include different conditions. In this study, the effects of data enrichment on the performance of deep networks in the super resolution problem were investigated experimentally. A total of six basic image transformations were used for the enrichment procedures. In the experiments, two deep network models were trained with variants of the ILSVRC2012 dataset enriched by these six image transformation processes. Considering a single image transformation, it has been observed that the data enriched with 180-degree rotation provides the best results. The most unsuccessful result was obtained when the models were trained on the enriched data generated by the upside-down flip. Models scored highest when trained with a mix of all transformations.
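The six basic transformations are not enumerated above; a plausible set of such transformations (an assumption, since only the 180-degree rotation and the upside-down flip are named) can be sketched on a nested-list image in pure Python:

```python
# Six basic image transformations on a row-major nested-list image.
def rot90(img):   return [list(row) for row in zip(*img[::-1])]   # clockwise
def rot180(img):  return [row[::-1] for row in img[::-1]]
def rot270(img):  return [list(row) for row in zip(*img)][::-1]   # counterclockwise
def flip_lr(img): return [row[::-1] for row in img]               # mirror
def flip_ud(img): return img[::-1]                                # upside-down flip
def transpose(img): return [list(row) for row in zip(*img)]

img = [[1, 2], [3, 4]]
print(rot180(img))   # [[4, 3], [2, 1]]
print(flip_ud(img))  # [[3, 4], [1, 2]]
```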
|
1610.06366
|
Flavio D'Alessandro
|
Flavio D'Alessandro, Oscar H. Ibarra, Ian McQuillan
|
On Finite-Index Indexed Grammars and Their Restrictions
|
16 pages, latest version
|
Information and Computation, Vol. 279, 2021, p. 1-13
|
10.1016/j.ic.2020.104613
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The family, L(INDLIN), of languages generated by linear indexed grammars has
been studied in the literature. It is known that the Parikh image of every
language in L(INDLIN) is semi-linear. However, there are bounded semi linear
languages that are not in L(INDLIN). Here, we look at larger families of
(restricted) indexed languages and study their properties, their relationships,
and their decidability properties.
|
[
{
"created": "Thu, 20 Oct 2016 11:30:10 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Oct 2016 06:31:18 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Oct 2016 16:35:53 GMT",
"version": "v3"
},
{
"created": "Wed, 7 Dec 2022 16:47:42 GMT",
"version": "v4"
}
] |
2022-12-08
|
[
[
"D'Alessandro",
"Flavio",
""
],
[
"Ibarra",
"Oscar H.",
""
],
[
"McQuillan",
"Ian",
""
]
] |
The family, L(INDLIN), of languages generated by linear indexed grammars has been studied in the literature. It is known that the Parikh image of every language in L(INDLIN) is semi-linear. However, there are bounded semi-linear languages that are not in L(INDLIN). Here, we look at larger families of (restricted) indexed languages and study their properties, their relationships, and their decidability properties.
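For context, the Parikh image simply maps each word to its vector of symbol counts, and a semi-linear set is a finite union of linear sets of such vectors. A minimal sketch:

```python
# Parikh image: count the occurrences of each alphabet symbol in a word.
from collections import Counter

def parikh(word, alphabet):
    counts = Counter(word)
    return tuple(counts[a] for a in alphabet)

# Every word of {a^n b^n} lands in the linear set {(0, 0) + k * (1, 1)}:
print(parikh("aabb", "ab"))    # (2, 2)
print(parikh("aaabbb", "ab"))  # (3, 3)
```

The Parikh image forgets symbol order, which is why the non-context-free language {a^n b^n c^n} shares its image with the regular (abc)*.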
|
2210.12582
|
Liliang Ren
|
Liliang Ren, Zixuan Zhang, Han Wang, Clare R. Voss, Chengxiang Zhai,
Heng Ji
|
Language Model Pre-Training with Sparse Latent Typing
|
EMNLP 2022 (Oral)
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern large-scale Pre-trained Language Models (PLMs) have achieved
tremendous success on a wide range of downstream tasks. However, most of the LM
pre-training objectives only focus on text reconstruction, but have not sought
to learn latent-level interpretable representations of sentences. In this
paper, we push language models to obtain a deeper understanding
of sentences by proposing a new pre-training objective, Sparse Latent Typing,
which enables the model to sparsely extract sentence-level keywords with
diverse latent types. Experimental results show that our model is able to learn
interpretable latent type categories in a self-supervised manner without using
any external knowledge. Besides, the language model pre-trained with such an
objective also significantly improves Information Extraction-related downstream
tasks in both supervised and few-shot settings. Our code is publicly available
at: https://github.com/renll/SparseLT.
|
[
{
"created": "Sun, 23 Oct 2022 00:37:08 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Oct 2022 22:41:30 GMT",
"version": "v2"
}
] |
2022-10-28
|
[
[
"Ren",
"Liliang",
""
],
[
"Zhang",
"Zixuan",
""
],
[
"Wang",
"Han",
""
],
[
"Voss",
"Clare R.",
""
],
[
"Zhai",
"Chengxiang",
""
],
[
"Ji",
"Heng",
""
]
] |
Modern large-scale Pre-trained Language Models (PLMs) have achieved tremendous success on a wide range of downstream tasks. However, most of the LM pre-training objectives only focus on text reconstruction, but have not sought to learn latent-level interpretable representations of sentences. In this paper, we push language models to obtain a deeper understanding of sentences by proposing a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types. Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge. Besides, the language model pre-trained with such an objective also significantly improves Information Extraction-related downstream tasks in both supervised and few-shot settings. Our code is publicly available at: https://github.com/renll/SparseLT.
|
2301.12974
|
Daniel Rugeles
|
Daniel Rugeles and Zhen Hai and Juan Felipe Carmona and Manoranjan
Dash and Gao Cong
|
Improving the Inference of Topic Models via Infinite Latent State
Replications
| null | null | null | null |
cs.CL cs.AI cs.LG math.ST stat.TH
|
http://creativecommons.org/licenses/by/4.0/
|
In text mining, topic models are a type of probabilistic generative models
for inferring latent semantic topics from text corpus. One of the most popular
inference approaches to topic models is perhaps collapsed Gibbs sampling (CGS),
which typically samples one single topic label for each observed document-word
pair. In this paper, we aim at improving the inference of CGS for topic models.
We propose to leverage state augmentation technique by maximizing the number of
topic samples to infinity, and then develop a new inference approach, called
infinite latent state replication (ILR), to generate robust soft topic
assignment for each given document-word pair. Experimental results on the
publicly available datasets show that ILR outperforms CGS for inference of
existing established topic models.
|
[
{
"created": "Wed, 25 Jan 2023 17:07:25 GMT",
"version": "v1"
}
] |
2023-01-31
|
[
[
"Rugeles",
"Daniel",
""
],
[
"Hai",
"Zhen",
""
],
[
"Carmona",
"Juan Felipe",
""
],
[
"Dash",
"Manoranjan",
""
],
[
"Cong",
"Gao",
""
]
] |
In text mining, topic models are a type of probabilistic generative models for inferring latent semantic topics from text corpus. One of the most popular inference approaches to topic models is perhaps collapsed Gibbs sampling (CGS), which typically samples one single topic label for each observed document-word pair. In this paper, we aim at improving the inference of CGS for topic models. We propose to leverage state augmentation technique by maximizing the number of topic samples to infinity, and then develop a new inference approach, called infinite latent state replication (ILR), to generate robust soft topic assignment for each given document-word pair. Experimental results on the publicly available datasets show that ILR outperforms CGS for inference of existing established topic models.
|
2405.05285
|
Jelena Pavlovic
|
Jelena Pavlovic (University of Belgrade, Faculty of Philosophy and
Koucing centar Research Lab), Jugoslav Krstic, Luka Mitrovic, Djordje Babic,
Adrijana Milosavljevic, Milena Nikolic, Tijana Karaklic and Tijana Mitrovic
(Koucing centar Research Lab)
|
Generative AI as a metacognitive agent: A comparative mixed-method study
with human participants on ICF-mimicking exam performance
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This study investigates the metacognitive capabilities of Large Language
Models relative to human metacognition in the context of the International
Coaching Federation ICF mimicking exam, a situational judgment test related to
coaching competencies. Using a mixed method approach, we assessed the
metacognitive performance, including sensitivity, accuracy in probabilistic
predictions, and bias, of human participants and five advanced LLMs (GPT-4,
Claude-3-Opus 3, Mistral Large, Llama 3, and Gemini 1.5 Pro). The results
indicate that LLMs outperformed humans across all metacognitive metrics,
particularly in terms of reduced overconfidence, compared to humans. However,
both LLMs and humans showed less adaptability in ambiguous scenarios, adhering
closely to predefined decision frameworks. The study suggests that Generative
AI can effectively engage in human-like metacognitive processing without
conscious awareness. Implications of the study are discussed in relation to
development of AI simulators that scaffold cognitive and metacognitive aspects
of mastering coaching competencies. More broadly, implications of these results
are discussed in relation to development of metacognitive modules that lead
towards more autonomous and intuitive AI systems.
|
[
{
"created": "Tue, 7 May 2024 22:15:12 GMT",
"version": "v1"
}
] |
2024-05-10
|
[
[
"Pavlovic",
"Jelena",
"",
"University of Belgrade, Faculty of Philosophy and\n Koucing centar Research Lab"
],
[
"Krstic",
"Jugoslav",
"",
"Koucing centar Research Lab"
],
[
"Mitrovic",
"Luka",
"",
"Koucing centar Research Lab"
],
[
"Babic",
"Djordje",
"",
"Koucing centar Research Lab"
],
[
"Milosavljevic",
"Adrijana",
"",
"Koucing centar Research Lab"
],
[
"Nikolic",
"Milena",
"",
"Koucing centar Research Lab"
],
[
"Karaklic",
"Tijana",
"",
"Koucing centar Research Lab"
],
[
"Mitrovic",
"Tijana",
"",
"Koucing centar Research Lab"
]
] |
This study investigates the metacognitive capabilities of Large Language Models relative to human metacognition in the context of the International Coaching Federation ICF mimicking exam, a situational judgment test related to coaching competencies. Using a mixed method approach, we assessed the metacognitive performance, including sensitivity, accuracy in probabilistic predictions, and bias, of human participants and five advanced LLMs (GPT-4, Claude-3-Opus 3, Mistral Large, Llama 3, and Gemini 1.5 Pro). The results indicate that LLMs outperformed humans across all metacognitive metrics, particularly in terms of reduced overconfidence, compared to humans. However, both LLMs and humans showed less adaptability in ambiguous scenarios, adhering closely to predefined decision frameworks. The study suggests that Generative AI can effectively engage in human-like metacognitive processing without conscious awareness. Implications of the study are discussed in relation to development of AI simulators that scaffold cognitive and metacognitive aspects of mastering coaching competencies. More broadly, implications of these results are discussed in relation to development of metacognitive modules that lead towards more autonomous and intuitive AI systems.
|
2407.08947
|
Jeeyung Kim
|
Jeeyung Kim, Ze Wang and Qiang Qiu
|
Constructing Concept-based Models to Mitigate Spurious Correlations with
Minimal Human Effort
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Enhancing model interpretability can address spurious correlations by
revealing how models draw their predictions. Concept Bottleneck Models (CBMs)
can provide a principled way of disclosing and guiding model behaviors through
human-understandable concepts, albeit at a high cost of human efforts in data
annotation. In this paper, we leverage a synergy of multiple foundation models
to construct CBMs with nearly no human effort. We discover undesirable biases
in CBMs built on pre-trained models and propose a novel framework designed to
exploit pre-trained models while being immune to these biases, thereby reducing
vulnerability to spurious correlations. Specifically, our method offers a
seamless pipeline that adopts foundation models for assessing potential
spurious correlations in datasets, annotating concepts for images, and refining
the annotations for improved robustness. We evaluate the proposed method on
multiple datasets, and the results demonstrate its effectiveness in reducing
model reliance on spurious correlations while preserving its interpretability.
|
[
{
"created": "Fri, 12 Jul 2024 03:07:28 GMT",
"version": "v1"
}
] |
2024-07-15
|
[
[
"Kim",
"Jeeyung",
""
],
[
"Wang",
"Ze",
""
],
[
"Qiu",
"Qiang",
""
]
] |
Enhancing model interpretability can address spurious correlations by revealing how models draw their predictions. Concept Bottleneck Models (CBMs) can provide a principled way of disclosing and guiding model behaviors through human-understandable concepts, albeit at a high cost of human efforts in data annotation. In this paper, we leverage a synergy of multiple foundation models to construct CBMs with nearly no human effort. We discover undesirable biases in CBMs built on pre-trained models and propose a novel framework designed to exploit pre-trained models while being immune to these biases, thereby reducing vulnerability to spurious correlations. Specifically, our method offers a seamless pipeline that adopts foundation models for assessing potential spurious correlations in datasets, annotating concepts for images, and refining the annotations for improved robustness. We evaluate the proposed method on multiple datasets, and the results demonstrate its effectiveness in reducing model reliance on spurious correlations while preserving its interpretability.
|
2311.09137
|
Izak Yasrebi-De Kom
|
Izak Yasrebi-de Kom, Joanna Klopotowska, Dave Dongelmans, Nicolette De
Keizer, Kitty Jager, Ameen Abu-Hanna, Giovanni Cin\`a
|
Causal prediction models for medication safety monitoring: The diagnosis
of vancomycin-induced acute kidney injury
|
Extended Abstract presented at Machine Learning for Health (ML4H)
symposium 2023, December 10th, 2023, New Orleans, United States, 14 pages
| null | null | null |
cs.LG stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
The current best practice approach for the retrospective diagnosis of adverse
drug events (ADEs) in hospitalized patients relies on a full patient chart
review and a formal causality assessment by multiple medical experts. This
evaluation serves to qualitatively estimate the probability of causation (PC);
the probability that a drug was a necessary cause of an adverse event. This
practice is manual, resource intensive and prone to human biases, and may thus
benefit from data-driven decision support. Here, we pioneer a causal modeling
approach using observational data to estimate a lower bound of the PC
(PC$_{low}$). This method includes two key causal inference components: (1) the
target trial emulation framework and (2) estimation of individualized treatment
effects using machine learning. We apply our method to the clinically relevant
use-case of vancomycin-induced acute kidney injury in intensive care patients,
and compare our causal model-based PC$_{low}$ estimates to qualitative
estimates of the PC provided by a medical expert. Important limitations and
potential improvements are discussed, and we conclude that future improved
causal models could provide essential data-driven support for medication safety
monitoring in hospitalized patients.
|
[
{
"created": "Wed, 15 Nov 2023 17:29:24 GMT",
"version": "v1"
}
] |
2023-12-08
|
[
[
"Kom",
"Izak Yasrebi-de",
""
],
[
"Klopotowska",
"Joanna",
""
],
[
"Dongelmans",
"Dave",
""
],
[
"De Keizer",
"Nicolette",
""
],
[
"Jager",
"Kitty",
""
],
[
"Abu-Hanna",
"Ameen",
""
],
[
"Cinà",
"Giovanni",
""
]
] |
The current best practice approach for the retrospective diagnosis of adverse drug events (ADEs) in hospitalized patients relies on a full patient chart review and a formal causality assessment by multiple medical experts. This evaluation serves to qualitatively estimate the probability of causation (PC); the probability that a drug was a necessary cause of an adverse event. This practice is manual, resource intensive and prone to human biases, and may thus benefit from data-driven decision support. Here, we pioneer a causal modeling approach using observational data to estimate a lower bound of the PC (PC$_{low}$). This method includes two key causal inference components: (1) the target trial emulation framework and (2) estimation of individualized treatment effects using machine learning. We apply our method to the clinically relevant use-case of vancomycin-induced acute kidney injury in intensive care patients, and compare our causal model-based PC$_{low}$ estimates to qualitative estimates of the PC provided by a medical expert. Important limitations and potential improvements are discussed, and we conclude that future improved causal models could provide essential data-driven support for medication safety monitoring in hospitalized patients.
|
2102.12786
|
Nicolas Nicolaou
|
Antonio Fernandez Anta, Chryssis Georgiou, Theophanis Hadjistasi,
Nicolas Nicolaou, Efstathios Stavrakis, Andria Trigeorgi
|
Fragmented Objects: Boosting Concurrency of Shared Large Objects
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This work examines strategies to handle large shared data objects in
distributed storage systems (DSS), while boosting the number of concurrent
accesses, maintaining strong consistency guarantees, and ensuring good
operation performance. To this respect, we define the notion of fragmented
objects: concurrent objects composed of a list of fragments (or blocks) that
allow operations to manipulate each of their fragments individually. As the
fragments belong to the same object, it is not enough that each fragment is
linearizable to have useful consistency guarantees in the composed object.
Hence, we capture the consistency semantic of the whole object with the notion
of fragmented linearizability. Then, considering that a variance of
linearizability, coverability, is more suited for versioned objects like files,
we provide an implementation of a distributed file system, called COBFS, that
utilizes coverable fragmented objects (i.e., files). In COBFS, each file is a
linked-list of coverable block objects. Preliminary emulation of COBFS
demonstrates the potential of our approach in boosting the concurrency of
strongly consistent large objects.
|
[
{
"created": "Thu, 25 Feb 2021 11:17:41 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Mar 2021 09:06:10 GMT",
"version": "v2"
}
] |
2021-03-09
|
[
[
"Anta",
"Antonio Fernandez",
""
],
[
"Georgiou",
"Chryssis",
""
],
[
"Hadjistasi",
"Theophanis",
""
],
[
"Nicolaou",
"Nicolas",
""
],
[
"Stavrakis",
"Efstathios",
""
],
[
"Trigeorgi",
"Andria",
""
]
] |
This work examines strategies to handle large shared data objects in distributed storage systems (DSS), while boosting the number of concurrent accesses, maintaining strong consistency guarantees, and ensuring good operation performance. To this respect, we define the notion of fragmented objects: concurrent objects composed of a list of fragments (or blocks) that allow operations to manipulate each of their fragments individually. As the fragments belong to the same object, it is not enough that each fragment is linearizable to have useful consistency guarantees in the composed object. Hence, we capture the consistency semantic of the whole object with the notion of fragmented linearizability. Then, considering that a variance of linearizability, coverability, is more suited for versioned objects like files, we provide an implementation of a distributed file system, called COBFS, that utilizes coverable fragmented objects (i.e., files). In COBFS, each file is a linked-list of coverable block objects. Preliminary emulation of COBFS demonstrates the potential of our approach in boosting the concurrency of strongly consistent large objects.
|
2302.03236
|
Chuyang Ke
|
Chuyang Ke, Jean Honorio
|
Exact Inference in High-order Structured Prediction
| null |
International Conference on Machine Learning (ICML), 2023
| null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the problem of inference in high-order structured
prediction tasks. In the context of Markov random fields, the goal of a
high-order inference task is to maximize a score function on the space of
labels, and the score function can be decomposed into sum of unary and
high-order potentials. We apply a generative model approach to study the
problem of high-order inference, and provide a two-stage convex optimization
algorithm for exact label recovery. We also provide a new class of hypergraph
structural properties related to hyperedge expansion that drives the success in
general high-order inference problems. Finally, we connect the performance of
our algorithm and the hyperedge expansion property using a novel hypergraph
Cheeger-type inequality.
|
[
{
"created": "Tue, 7 Feb 2023 03:42:57 GMT",
"version": "v1"
}
] |
2023-10-23
|
[
[
"Ke",
"Chuyang",
""
],
[
"Honorio",
"Jean",
""
]
] |
In this paper, we study the problem of inference in high-order structured prediction tasks. In the context of Markov random fields, the goal of a high-order inference task is to maximize a score function on the space of labels, and the score function can be decomposed into sum of unary and high-order potentials. We apply a generative model approach to study the problem of high-order inference, and provide a two-stage convex optimization algorithm for exact label recovery. We also provide a new class of hypergraph structural properties related to hyperedge expansion that drives the success in general high-order inference problems. Finally, we connect the performance of our algorithm and the hyperedge expansion property using a novel hypergraph Cheeger-type inequality.
|
1902.08160
|
Maxime Gabella
|
Maxime Gabella
|
Topology of Learning in Artificial Neural Networks
|
8 pages, 8 figures
| null |
10.1109/TNNLS.2020.3015790
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding how neural networks learn remains one of the central challenges
in machine learning research. From random at the start of training, the weights
of a neural network evolve in such a way as to be able to perform a variety of
tasks, like classifying images. Here we study the emergence of structure in the
weights by applying methods from topological data analysis. We train simple
feedforward neural networks on the MNIST dataset and monitor the evolution of
the weights. When initialized to zero, the weights follow trajectories that
branch off recurrently, thus generating trees that describe the growth of the
effective capacity of each layer. When initialized to tiny random values, the
weights evolve smoothly along two-dimensional surfaces. We show that natural
coordinates on these learning surfaces correspond to important factors of
variation.
|
[
{
"created": "Thu, 21 Feb 2019 17:48:45 GMT",
"version": "v1"
},
{
"created": "Thu, 23 May 2019 07:51:42 GMT",
"version": "v2"
},
{
"created": "Fri, 31 May 2019 08:16:12 GMT",
"version": "v3"
},
{
"created": "Tue, 27 Oct 2020 11:54:59 GMT",
"version": "v4"
}
] |
2020-10-28
|
[
[
"Gabella",
"Maxime",
""
]
] |
Understanding how neural networks learn remains one of the central challenges in machine learning research. From random at the start of training, the weights of a neural network evolve in such a way as to be able to perform a variety of tasks, like classifying images. Here we study the emergence of structure in the weights by applying methods from topological data analysis. We train simple feedforward neural networks on the MNIST dataset and monitor the evolution of the weights. When initialized to zero, the weights follow trajectories that branch off recurrently, thus generating trees that describe the growth of the effective capacity of each layer. When initialized to tiny random values, the weights evolve smoothly along two-dimensional surfaces. We show that natural coordinates on these learning surfaces correspond to important factors of variation.
|
2201.07906
|
Lee Kezar
|
Lee Kezar, Pei Zhou
|
The Role of Facial Expressions and Emotion in ASL
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is little prior work on quantifying the relationships between facial
expressions and emotionality in American Sign Language. In this final report,
we provide two methods for studying these relationships through probability and
prediction. Using a large corpus of natural signing manually annotated with
facial features paired with lexical emotion datasets, we find that there exist
many relationships between emotionality and the face, and that a simple
classifier can predict what someone is saying in terms of broad emotional
categories only by looking at the face.
|
[
{
"created": "Wed, 19 Jan 2022 23:11:48 GMT",
"version": "v1"
}
] |
2022-01-21
|
[
[
"Kezar",
"Lee",
""
],
[
"Zhou",
"Pei",
""
]
] |
There is little prior work on quantifying the relationships between facial expressions and emotionality in American Sign Language. In this final report, we provide two methods for studying these relationships through probability and prediction. Using a large corpus of natural signing manually annotated with facial features paired with lexical emotion datasets, we find that there exist many relationships between emotionality and the face, and that a simple classifier can predict what someone is saying in terms of broad emotional categories only by looking at the face.
|
2312.09670
|
Jesus Lovon
|
Jes\'us Lov\'on-Melgarejo, Jose G. Moreno, Romaric Besan\c{c}on,
Olivier Ferret, Lynda Tamine
|
Probing Pretrained Language Models with Hierarchy Properties
|
Accepted at ECIR 2024
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Since Pretrained Language Models (PLMs) are the cornerstone of the most
recent Information Retrieval (IR) models, the way they encode semantic
knowledge is particularly important. However, little attention has been given
to studying the PLMs' capability to capture hierarchical semantic knowledge.
Traditionally, evaluating such knowledge encoded in PLMs relies on their
performance on a task-dependent evaluation approach based on proxy tasks, such
as hypernymy detection. Unfortunately, this approach potentially ignores other
implicit and complex taxonomic relations. In this work, we propose a
task-agnostic evaluation method able to evaluate to what extent PLMs can
capture complex taxonomy relations, such as ancestors and siblings. The
evaluation is based on intrinsic properties that capture the hierarchical
nature of taxonomies. Our experimental evaluation shows that the
lexico-semantic knowledge implicitly encoded in PLMs does not always capture
hierarchical relations. We further demonstrate that the proposed properties can
be injected into PLMs to improve their understanding of hierarchy. Through
evaluations on taxonomy reconstruction, hypernym discovery and reading
comprehension tasks, we show that the knowledge about hierarchy is moderately
but not systematically transferable across tasks.
|
[
{
"created": "Fri, 15 Dec 2023 10:31:36 GMT",
"version": "v1"
}
] |
2023-12-18
|
[
[
"Lovón-Melgarejo",
"Jesús",
""
],
[
"Moreno",
"Jose G.",
""
],
[
"Besançon",
"Romaric",
""
],
[
"Ferret",
"Olivier",
""
],
[
"Tamine",
"Lynda",
""
]
] |
Since Pretrained Language Models (PLMs) are the cornerstone of the most recent Information Retrieval (IR) models, the way they encode semantic knowledge is particularly important. However, little attention has been given to studying the PLMs' capability to capture hierarchical semantic knowledge. Traditionally, evaluating such knowledge encoded in PLMs relies on their performance on a task-dependent evaluation approach based on proxy tasks, such as hypernymy detection. Unfortunately, this approach potentially ignores other implicit and complex taxonomic relations. In this work, we propose a task-agnostic evaluation method able to evaluate to what extent PLMs can capture complex taxonomy relations, such as ancestors and siblings. The evaluation is based on intrinsic properties that capture the hierarchical nature of taxonomies. Our experimental evaluation shows that the lexico-semantic knowledge implicitly encoded in PLMs does not always capture hierarchical relations. We further demonstrate that the proposed properties can be injected into PLMs to improve their understanding of hierarchy. Through evaluations on taxonomy reconstruction, hypernym discovery and reading comprehension tasks, we show that the knowledge about hierarchy is moderately but not systematically transferable across tasks.
|
2308.09882
|
Jie Cheng
|
Jie Cheng, Xiaodong Mei and Ming Liu
|
Forecast-MAE: Self-supervised Pre-training for Motion Forecasting with
Masked Autoencoders
|
ICCV2023
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This study explores the application of self-supervised learning (SSL) to the
task of motion forecasting, an area that has not yet been extensively
investigated despite the widespread success of SSL in computer vision and
natural language processing. To address this gap, we introduce Forecast-MAE, an
extension of the mask autoencoders framework that is specifically designed for
self-supervised learning of the motion forecasting task. Our approach includes
a novel masking strategy that leverages the strong interconnections between
agents' trajectories and road networks, involving complementary masking of
agents' future or history trajectories and random masking of lane segments. Our
experiments on the challenging Argoverse 2 motion forecasting benchmark show
that Forecast-MAE, which utilizes standard Transformer blocks with minimal
inductive bias, achieves competitive performance compared to state-of-the-art
methods that rely on supervised learning and sophisticated designs. Moreover,
it outperforms the previous self-supervised learning method by a significant
margin. Code is available at https://github.com/jchengai/forecast-mae.
|
[
{
"created": "Sat, 19 Aug 2023 02:27:51 GMT",
"version": "v1"
}
] |
2023-08-22
|
[
[
"Cheng",
"Jie",
""
],
[
"Mei",
"Xiaodong",
""
],
[
"Liu",
"Ming",
""
]
] |
This study explores the application of self-supervised learning (SSL) to the task of motion forecasting, an area that has not yet been extensively investigated despite the widespread success of SSL in computer vision and natural language processing. To address this gap, we introduce Forecast-MAE, an extension of the mask autoencoders framework that is specifically designed for self-supervised learning of the motion forecasting task. Our approach includes a novel masking strategy that leverages the strong interconnections between agents' trajectories and road networks, involving complementary masking of agents' future or history trajectories and random masking of lane segments. Our experiments on the challenging Argoverse 2 motion forecasting benchmark show that Forecast-MAE, which utilizes standard Transformer blocks with minimal inductive bias, achieves competitive performance compared to state-of-the-art methods that rely on supervised learning and sophisticated designs. Moreover, it outperforms the previous self-supervised learning method by a significant margin. Code is available at https://github.com/jchengai/forecast-mae.
|
1905.13271
|
Isaac Lage
|
Isaac Lage, Daphna Lifschitz, Finale Doshi-Velez, Ofra Amir
|
Exploring Computational User Models for Agent Policy Summarization
|
To appear at IJCAI 2019. 14 pages (incl references and appendix)
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
AI agents are being developed to support high stakes decision-making
processes from driving cars to prescribing drugs, making it increasingly
important for human users to understand their behavior. Policy summarization
methods aim to convey strengths and weaknesses of such agents by demonstrating
their behavior in a subset of informative states. Some policy summarization
methods extract a summary that optimizes the ability to reconstruct the agent's
policy under the assumption that users will deploy inverse reinforcement
learning. In this paper, we explore the use of different models for extracting
summaries. We introduce an imitation learning-based approach to policy
summarization; we demonstrate through computational simulations that a mismatch
between the model used to extract a summary and the model used to reconstruct
the policy results in worse reconstruction quality; and we demonstrate through
a human-subject study that people use different models to reconstruct policies
in different contexts, and that matching the summary extraction model to these
can improve performance. Together, our results suggest that it is important to
carefully consider user models in policy summarization.
|
[
{
"created": "Thu, 30 May 2019 19:32:46 GMT",
"version": "v1"
}
] |
2019-06-03
|
[
[
"Lage",
"Isaac",
""
],
[
"Lifschitz",
"Daphna",
""
],
[
"Doshi-Velez",
"Finale",
""
],
[
"Amir",
"Ofra",
""
]
] |
AI agents are being developed to support high stakes decision-making processes from driving cars to prescribing drugs, making it increasingly important for human users to understand their behavior. Policy summarization methods aim to convey strengths and weaknesses of such agents by demonstrating their behavior in a subset of informative states. Some policy summarization methods extract a summary that optimizes the ability to reconstruct the agent's policy under the assumption that users will deploy inverse reinforcement learning. In this paper, we explore the use of different models for extracting summaries. We introduce an imitation learning-based approach to policy summarization; we demonstrate through computational simulations that a mismatch between the model used to extract a summary and the model used to reconstruct the policy results in worse reconstruction quality; and we demonstrate through a human-subject study that people use different models to reconstruct policies in different contexts, and that matching the summary extraction model to these can improve performance. Together, our results suggest that it is important to carefully consider user models in policy summarization.
|
2111.10699
|
Nate Veldt
|
Nate Veldt
|
Correlation Clustering via Strong Triadic Closure Labeling: Fast
Approximation Algorithms and Practical Lower Bounds
|
ICML 2022
| null | null | null |
cs.DS cs.DM cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Correlation clustering is a widely studied framework for clustering based on
pairwise similarity and dissimilarity scores, but its best approximation
algorithms rely on impractical linear programming relaxations. We present
faster approximation algorithms that avoid these relaxations, for two
well-studied special cases: cluster editing and cluster deletion. We accomplish
this by drawing new connections to edge labeling problems related to the
principle of strong triadic closure. This leads to faster and more practical
linear programming algorithms, as well as extremely scalable combinatorial
techniques, including the first combinatorial approximation algorithm for
cluster deletion. In practice, our algorithms produce approximate solutions
that nearly match the best algorithms in quality, while scaling to problems
that are orders of magnitude larger.
|
[
{
"created": "Sat, 20 Nov 2021 22:47:19 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Jun 2022 19:36:15 GMT",
"version": "v2"
}
] |
2022-06-27
|
[
[
"Veldt",
"Nate",
""
]
] |
Correlation clustering is a widely studied framework for clustering based on pairwise similarity and dissimilarity scores, but its best approximation algorithms rely on impractical linear programming relaxations. We present faster approximation algorithms that avoid these relaxations, for two well-studied special cases: cluster editing and cluster deletion. We accomplish this by drawing new connections to edge labeling problems related to the principle of strong triadic closure. This leads to faster and more practical linear programming algorithms, as well as extremely scalable combinatorial techniques, including the first combinatorial approximation algorithm for cluster deletion. In practice, our algorithms produce approximate solutions that nearly match the best algorithms in quality, while scaling to problems that are orders of magnitude larger.
|
2407.12094
|
Minh Nguyen
|
Minh Nguyen, Franck Dernoncourt, Seunghyun Yoon, Hanieh Deilamsalehy,
Hao Tan, Ryan Rossi, Quan Hung Tran, Trung Bui, Thien Huu Nguyen
|
Identifying Speakers in Dialogue Transcripts: A Text-based Approach
Using Pretrained Language Models
|
accepted to INTERSPEECH 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce an approach to identifying speaker names in dialogue
transcripts, a crucial task for enhancing content accessibility and
searchability in digital media archives. Despite the advancements in speech
recognition, the task of text-based speaker identification (SpeakerID) has
received limited attention, lacking large-scale, diverse datasets for effective
model training. Addressing these gaps, we present a novel, large-scale dataset
derived from the MediaSum corpus, encompassing transcripts from a wide range of
media sources. We propose novel transformer-based models tailored for
SpeakerID, leveraging contextual cues within dialogues to accurately attribute
speaker names. Through extensive experiments, our best model achieves a
precision of 80.3\%, setting a new benchmark for SpeakerID. The data and code
are publicly available here:
\url{https://github.com/adobe-research/speaker-identification}
|
[
{
"created": "Tue, 16 Jul 2024 18:03:58 GMT",
"version": "v1"
}
] |
2024-07-18
|
[
[
"Nguyen",
"Minh",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Yoon",
"Seunghyun",
""
],
[
"Deilamsalehy",
"Hanieh",
""
],
[
"Tan",
"Hao",
""
],
[
"Rossi",
"Ryan",
""
],
[
"Tran",
"Quan Hung",
""
],
[
"Bui",
"Trung",
""
],
[
"Nguyen",
"Thien Huu",
""
]
] |
We introduce an approach to identifying speaker names in dialogue transcripts, a crucial task for enhancing content accessibility and searchability in digital media archives. Despite the advancements in speech recognition, the task of text-based speaker identification (SpeakerID) has received limited attention, lacking large-scale, diverse datasets for effective model training. Addressing these gaps, we present a novel, large-scale dataset derived from the MediaSum corpus, encompassing transcripts from a wide range of media sources. We propose novel transformer-based models tailored for SpeakerID, leveraging contextual cues within dialogues to accurately attribute speaker names. Through extensive experiments, our best model achieves a precision of 80.3\%, setting a new benchmark for SpeakerID. The data and code are publicly available here: \url{https://github.com/adobe-research/speaker-identification}
|
2212.01714
|
Aditi Agrawal
|
Aditi Agrawal, Benjamin Reed
|
A survey on grading format of automated grading tools for programming
assignments
| null |
15th annual International Conference of Education, Research and
Innovation (2022) 7506-7514
|
10.21125/iceri.2022.1912
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The prevalence of online platforms and studies has generated the demand for
automated grading tools, and as a result, there are plenty in the market. Such
tools are developed to grade coding assignments quickly, accurately, and
effortlessly. Since a variety of tools are available to cater to diverse
programming languages and concepts, it is overwhelming for any instructor to
decide which one suits their requirements. There are several surveys studying
the tools and giving insights into how they function and what they support.
However, other than knowing the functionality, it is important for an
instructor to know how the assignments are graded and what the format of the
test cases is. This is crucial since the instructor has to design the grading
format, which involves a learning curve. This survey studies and evaluates the
automated grading tools based on their evaluation format. This in turn helps a
reader decide which tool to choose and provides insight into the assessment
settings and approaches used in grading the coding assignment in any specific
grading tool.
|
[
{
"created": "Sun, 4 Dec 2022 00:49:16 GMT",
"version": "v1"
}
] |
2022-12-06
|
[
[
"Agrawal",
"Aditi",
""
],
[
"Reed",
"Benjamin",
""
]
] |
The prevalence of online platforms and studies has generated the demand for automated grading tools, and as a result, there are plenty in the market. Such tools are developed to grade coding assignments quickly, accurately, and effortlessly. Since a variety of tools are available to cater to diverse programming languages and concepts, it is overwhelming for any instructor to decide which one suits their requirements. There are several surveys studying the tools and giving insights into how they function and what they support. However, other than knowing the functionality, it is important for an instructor to know how the assignments are graded and what the format of the test cases is. This is crucial since the instructor has to design the grading format, which involves a learning curve. This survey studies and evaluates the automated grading tools based on their evaluation format. This in turn helps a reader decide which tool to choose and provides insight into the assessment settings and approaches used in grading the coding assignment in any specific grading tool.
|
2109.04226
|
Jing Dong
|
Jing Dong, Shuai Li, Baoxiang Wang
|
Incentivizing an Unknown Crowd
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Motivated by the common strategic activities in crowdsourcing labeling, we
study the problem of sequentially eliciting information without verification
(EIWV) from a heterogeneous and unknown crowd of workers. We propose a
reinforcement learning-based approach that is effective against a wide range of
settings including potential irrationality and collusion among workers. With
the aid of a costly oracle and the inference method, our approach dynamically
decides the oracle calls and gains robustness even under the presence of
frequent collusion activities. Extensive experiments show the advantage of our
approach. Our results also present the first comprehensive experiments of EIWV
on large-scale real datasets and the first thorough study of the effects of
environmental variables.
|
[
{
"created": "Thu, 9 Sep 2021 12:42:26 GMT",
"version": "v1"
}
] |
2021-09-10
|
[
[
"Dong",
"Jing",
""
],
[
"Li",
"Shuai",
""
],
[
"Wang",
"Baoxiang",
""
]
] |
Motivated by the common strategic activities in crowdsourcing labeling, we study the problem of sequentially eliciting information without verification (EIWV) from a heterogeneous and unknown crowd of workers. We propose a reinforcement learning-based approach that is effective against a wide range of settings including potential irrationality and collusion among workers. With the aid of a costly oracle and the inference method, our approach dynamically decides the oracle calls and gains robustness even under the presence of frequent collusion activities. Extensive experiments show the advantage of our approach. Our results also present the first comprehensive experiments of EIWV on large-scale real datasets and the first thorough study of the effects of environmental variables.
|
2401.17244
|
Yuan Chiang
|
Yuan Chiang, Elvis Hsieh, Chia-Hong Chou, Janosh Riebesell
|
LLaMP: Large Language Model Made Powerful for High-fidelity Materials
Knowledge Retrieval and Distillation
|
31 pages, 5 figures
| null | null | null |
cs.CL cond-mat.mtrl-sci cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Reducing hallucination of Large Language Models (LLMs) is imperative for use
in the sciences, where reliability and reproducibility are crucial. However,
LLMs inherently lack long-term memory, making it a nontrivial, ad hoc, and
inevitably biased task to fine-tune them on domain-specific literature and
data. Here we introduce LLaMP, a multimodal retrieval-augmented generation
(RAG) framework of hierarchical reasoning-and-acting (ReAct) agents that can
dynamically and recursively interact with computational and experimental data
on the Materials Project (MP) and run atomistic simulations via a
high-throughput workflow interface. Without fine-tuning, LLaMP demonstrates
strong tool usage
ability to comprehend and integrate various modalities of materials science
concepts, fetch relevant data stores on the fly, process higher-order data
(such as crystal structure and elastic tensor), and streamline complex tasks in
computational materials and chemistry. We propose a simple metric combining
uncertainty and confidence estimates to evaluate the self-consistency of
responses by LLaMP and vanilla LLMs. Our benchmark shows that LLaMP effectively
mitigates the intrinsic bias in LLMs, counteracting the errors on bulk moduli,
electronic bandgaps, and formation energies that seem to derive from mixed data
sources. We also demonstrate LLaMP's capability to edit crystal structures and
run annealing molecular dynamics simulations using pre-trained machine-learning
force fields. The framework offers an intuitive and nearly hallucination-free
approach to exploring and scaling materials informatics, and establishes a
pathway for knowledge distillation and fine-tuning other language models. Code
and live demo are available at https://github.com/chiang-yuan/llamp
|
[
{
"created": "Tue, 30 Jan 2024 18:37:45 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Jun 2024 07:50:21 GMT",
"version": "v2"
}
] |
2024-06-04
|
[
[
"Chiang",
"Yuan",
""
],
[
"Hsieh",
"Elvis",
""
],
[
"Chou",
"Chia-Hong",
""
],
[
"Riebesell",
"Janosh",
""
]
] |
Reducing hallucination of Large Language Models (LLMs) is imperative for use in the sciences, where reliability and reproducibility are crucial. However, LLMs inherently lack long-term memory, making it a nontrivial, ad hoc, and inevitably biased task to fine-tune them on domain-specific literature and data. Here we introduce LLaMP, a multimodal retrieval-augmented generation (RAG) framework of hierarchical reasoning-and-acting (ReAct) agents that can dynamically and recursively interact with computational and experimental data on the Materials Project (MP) and run atomistic simulations via a high-throughput workflow interface. Without fine-tuning, LLaMP demonstrates strong tool usage ability to comprehend and integrate various modalities of materials science concepts, fetch relevant data stores on the fly, process higher-order data (such as crystal structure and elastic tensor), and streamline complex tasks in computational materials and chemistry. We propose a simple metric combining uncertainty and confidence estimates to evaluate the self-consistency of responses by LLaMP and vanilla LLMs. Our benchmark shows that LLaMP effectively mitigates the intrinsic bias in LLMs, counteracting the errors on bulk moduli, electronic bandgaps, and formation energies that seem to derive from mixed data sources. We also demonstrate LLaMP's capability to edit crystal structures and run annealing molecular dynamics simulations using pre-trained machine-learning force fields. The framework offers an intuitive and nearly hallucination-free approach to exploring and scaling materials informatics, and establishes a pathway for knowledge distillation and fine-tuning other language models. Code and live demo are available at https://github.com/chiang-yuan/llamp
|
1907.12172
|
Amal Gunatilake
|
Amal Gunatilake, Lasitha Piyathilaka, Sarath Kodagoda, Stephen
Barclay, Dammika Vitanage
|
Real-Time 3D Profiling with RGB-D Mapping in Pipelines Using Stereo
Camera Vision and Structured IR Laser Ring
|
6 pages, 14 figures, ICIEA 2019 conference paper, Robotics
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is focused on delivering a solution that can scan and reconstruct
the 3D profile of a pipeline in real-time using a crawler robot. A structured
infrared (IR) laser ring projector and a stereo camera system are used to
generate the 3D profile of the pipe as the robot moves inside the pipe. The
proposed stereo system does not require field calibrations and it is not
affected by the lateral movement of the robot, hence capable of producing an
accurate 3D map. The wavelength of the IR light source is chosen to be
non-overlapping with the visible spectrum of the color camera. Hence, RGB color
values of the depth can be obtained by projecting the 3D map into the color
image frame. The proposed system is implemented in the Robot Operating System
(ROS), producing real-time RGB-D maps with defects. The defect map exploits
differences in ovality, enabling real-time identification of structural defects
such as surface corrosion in pipe infrastructure. The lab experiments showed
that the proposed laser profiling system can detect ovality changes of the
pipe with millimeter-level accuracy and resolution.
|
[
{
"created": "Mon, 29 Jul 2019 01:50:52 GMT",
"version": "v1"
}
] |
2019-07-30
|
[
[
"Gunatilake",
"Amal",
""
],
[
"Piyathilaka",
"Lasitha",
""
],
[
"Kodagoda",
"Sarath",
""
],
[
"Barclay",
"Stephen",
""
],
[
"Vitanage",
"Dammika",
""
]
] |
This paper is focused on delivering a solution that can scan and reconstruct the 3D profile of a pipeline in real-time using a crawler robot. A structured infrared (IR) laser ring projector and a stereo camera system are used to generate the 3D profile of the pipe as the robot moves inside the pipe. The proposed stereo system does not require field calibrations and it is not affected by the lateral movement of the robot, hence capable of producing an accurate 3D map. The wavelength of the IR light source is chosen to be non-overlapping with the visible spectrum of the color camera. Hence, RGB color values of the depth can be obtained by projecting the 3D map into the color image frame. The proposed system is implemented in the Robot Operating System (ROS), producing real-time RGB-D maps with defects. The defect map exploits differences in ovality, enabling real-time identification of structural defects such as surface corrosion in pipe infrastructure. The lab experiments showed that the proposed laser profiling system can detect ovality changes of the pipe with millimeter-level accuracy and resolution.
|
1609.08442
|
Lantian Li Mr.
|
Lantian Li, Zhiyuan Tang, Dong Wang, Andrew Abel, Yang Feng, Shiyue
Zhang
|
Collaborative Learning for Language and Speaker Recognition
| null | null | null | null |
cs.SD cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a unified model to perform language and speaker
recognition simultaneously and altogether. The model is based on a multi-task
recurrent neural network where the output of one task is fed as the input of
the other, leading to a collaborative learning framework that can improve both
language and speaker recognition by borrowing information from each other. Our
experiments demonstrated that the multi-task model outperforms the
task-specific models on both tasks.
|
[
{
"created": "Tue, 27 Sep 2016 13:48:01 GMT",
"version": "v1"
},
{
"created": "Tue, 23 May 2017 09:56:54 GMT",
"version": "v2"
}
] |
2017-05-24
|
[
[
"Li",
"Lantian",
""
],
[
"Tang",
"Zhiyuan",
""
],
[
"Wang",
"Dong",
""
],
[
"Abel",
"Andrew",
""
],
[
"Feng",
"Yang",
""
],
[
"Zhang",
"Shiyue",
""
]
] |
This paper presents a unified model to perform language and speaker recognition simultaneously and altogether. The model is based on a multi-task recurrent neural network where the output of one task is fed as the input of the other, leading to a collaborative learning framework that can improve both language and speaker recognition by borrowing information from each other. Our experiments demonstrated that the multi-task model outperforms the task-specific models on both tasks.
|
1707.04412
|
Alon Talmor
|
Alon Talmor, Mor Geva, Jonathan Berant
|
Evaluating Semantic Parsing against a Simple Web-based Question
Answering Model
|
*sem 2017
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic parsing shines at analyzing complex natural language that involves
composition and computation over multiple pieces of evidence. However, datasets
for semantic parsing contain many factoid questions that can be answered from a
single web document. In this paper, we propose to evaluate semantic
parsing-based question answering models by comparing them to a question
answering baseline that queries the web and extracts the answer only from web
snippets, without access to the target knowledge-base. We investigate this
approach on COMPLEXQUESTIONS, a dataset designed to focus on compositional
language, and find that our model obtains reasonable performance (35 F1
compared to 41 F1 of state-of-the-art). We find in our analysis that our model
performs well on complex questions involving conjunctions, but struggles on
questions that involve relation composition and superlatives.
|
[
{
"created": "Fri, 14 Jul 2017 08:25:36 GMT",
"version": "v1"
}
] |
2017-07-17
|
[
[
"Talmor",
"Alon",
""
],
[
"Geva",
"Mor",
""
],
[
"Berant",
"Jonathan",
""
]
] |
Semantic parsing shines at analyzing complex natural language that involves composition and computation over multiple pieces of evidence. However, datasets for semantic parsing contain many factoid questions that can be answered from a single web document. In this paper, we propose to evaluate semantic parsing-based question answering models by comparing them to a question answering baseline that queries the web and extracts the answer only from web snippets, without access to the target knowledge-base. We investigate this approach on COMPLEXQUESTIONS, a dataset designed to focus on compositional language, and find that our model obtains reasonable performance (35 F1 compared to 41 F1 of state-of-the-art). We find in our analysis that our model performs well on complex questions involving conjunctions, but struggles on questions that involve relation composition and superlatives.
|
2310.00819
|
Ziqi Wang
|
Tianci Xue, Ziqi Wang, Heng Ji
|
Parameter-Efficient Tuning Helps Language Model Alignment
|
21 pages, 11 figures, 5 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aligning large language models (LLMs) with human preferences is essential for
safe and useful LLMs. Previous works mainly adopt reinforcement learning (RLHF)
and direct preference optimization (DPO) with human feedback for alignment.
Nevertheless, they have certain drawbacks. One such limitation is that they can
only align models with one preference at the training time (e.g., they cannot
learn to generate concise responses when the preference data prefers detailed
responses), or have certain constraints for the data format (e.g., DPO only
supports pairwise preference data). To this end, prior works incorporate
controllable generations for alignment to make language models learn multiple
preferences and provide outputs with different preferences during inference if
asked. Controllable generation also offers more flexibility with regard to data
format (e.g., it supports pointwise preference data). Specifically, it uses
different control tokens for different preferences during training and
inference, making LLMs behave differently when required. Current controllable
generation methods either use a special token or hand-crafted prompts as
control tokens, and optimize them together with LLMs. As control tokens are
typically much lighter than LLMs, this optimization strategy may not
effectively optimize control tokens. To this end, we first use
parameter-efficient tuning (e.g., prompt tuning and low-rank adaptation) to
optimize control tokens and then fine-tune models for controllable generations,
similar to prior works. Our approach, alignMEnt with parameter-Efficient Tuning
(MEET), improves the quality of control tokens, thus improving controllable
generation quality consistently by an apparent margin on two well-recognized
datasets compared with prior works.
|
[
{
"created": "Sun, 1 Oct 2023 23:27:14 GMT",
"version": "v1"
}
] |
2023-10-03
|
[
[
"Xue",
"Tianci",
""
],
[
"Wang",
"Ziqi",
""
],
[
"Ji",
"Heng",
""
]
] |
Aligning large language models (LLMs) with human preferences is essential for safe and useful LLMs. Previous works mainly adopt reinforcement learning (RLHF) and direct preference optimization (DPO) with human feedback for alignment. Nevertheless, they have certain drawbacks. One such limitation is that they can only align models with one preference at the training time (e.g., they cannot learn to generate concise responses when the preference data prefers detailed responses), or have certain constraints for the data format (e.g., DPO only supports pairwise preference data). To this end, prior works incorporate controllable generations for alignment to make language models learn multiple preferences and provide outputs with different preferences during inference if asked. Controllable generation also offers more flexibility with regard to data format (e.g., it supports pointwise preference data). Specifically, it uses different control tokens for different preferences during training and inference, making LLMs behave differently when required. Current controllable generation methods either use a special token or hand-crafted prompts as control tokens, and optimize them together with LLMs. As control tokens are typically much lighter than LLMs, this optimization strategy may not effectively optimize control tokens. To this end, we first use parameter-efficient tuning (e.g., prompt tuning and low-rank adaptation) to optimize control tokens and then fine-tune models for controllable generations, similar to prior works. Our approach, alignMEnt with parameter-Efficient Tuning (MEET), improves the quality of control tokens, thus improving controllable generation quality consistently by an apparent margin on two well-recognized datasets compared with prior works.
|
2407.18902
|
Haozhi Qi
|
Jun Wang, Ying Yuan, Haichuan Che, Haozhi Qi, Yi Ma, Jitendra Malik,
Xiaolong Wang
|
Lessons from Learning to Spin "Pens"
|
Website: https://penspin.github.io/
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In-hand manipulation of pen-like objects is an important skill in our daily
lives, as many tools such as hammers and screwdrivers are similarly shaped.
However, current learning-based methods struggle with this task due to a lack
of high-quality demonstrations and the significant gap between simulation and
the real world. In this work, we push the boundaries of learning-based in-hand
manipulation systems by demonstrating the capability to spin pen-like objects.
We first use reinforcement learning to train an oracle policy with privileged
information and generate a high-fidelity trajectory dataset in simulation. This
serves two purposes: 1) pre-training a sensorimotor policy in simulation; 2)
conducting open-loop trajectory replay in the real world. We then fine-tune the
sensorimotor policy using these real-world trajectories to adapt it to the
real-world dynamics. With fewer than 50 trajectories, our policy learns to rotate
more than ten pen-like objects with different physical properties for multiple
revolutions. We present a comprehensive analysis of our design choices and
share the lessons learned during development.
|
[
{
"created": "Fri, 26 Jul 2024 17:56:01 GMT",
"version": "v1"
}
] |
2024-07-29
|
[
[
"Wang",
"Jun",
""
],
[
"Yuan",
"Ying",
""
],
[
"Che",
"Haichuan",
""
],
[
"Qi",
"Haozhi",
""
],
[
"Ma",
"Yi",
""
],
[
"Malik",
"Jitendra",
""
],
[
"Wang",
"Xiaolong",
""
]
] |
In-hand manipulation of pen-like objects is an important skill in our daily lives, as many tools such as hammers and screwdrivers are similarly shaped. However, current learning-based methods struggle with this task due to a lack of high-quality demonstrations and the significant gap between simulation and the real world. In this work, we push the boundaries of learning-based in-hand manipulation systems by demonstrating the capability to spin pen-like objects. We first use reinforcement learning to train an oracle policy with privileged information and generate a high-fidelity trajectory dataset in simulation. This serves two purposes: 1) pre-training a sensorimotor policy in simulation; 2) conducting open-loop trajectory replay in the real world. We then fine-tune the sensorimotor policy using these real-world trajectories to adapt it to the real-world dynamics. With fewer than 50 trajectories, our policy learns to rotate more than ten pen-like objects with different physical properties for multiple revolutions. We present a comprehensive analysis of our design choices and share the lessons learned during development.
|
1906.00890
|
Mauro Franceschelli
|
Mauro Franceschelli and Paolo Frasca
|
Stability of Open Multi-Agent Systems and Applications to Dynamic
Consensus
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this technical note we consider a class of multi-agent network systems
that we refer to as Open Multi-Agent Systems (OMAS): in these multi-agent
systems, an indefinite number of agents may join or leave the network at any
time. Focusing on discrete-time evolutions of scalar agents, we provide a novel
theoretical framework to study the dynamical properties of OMAS: specifically,
we propose a suitable notion of stability and derive sufficient conditions to
ensure stability in this sense. These sufficient conditions regard the
arrival/departure of an agent as a disturbance: consistently, they require the
effect of arrivals/departures to be bounded (in a precise sense) and the OMAS
to be contractive in the absence of arrivals/departures. In order to provide an
example of application for this theory, we re-formulate the well-known
Proportional Dynamic Consensus for Open Multi-Agent Systems and we characterize
the stability properties of the resulting Open Proportional Dynamic Consensus
algorithm.
|
[
{
"created": "Mon, 3 Jun 2019 15:48:40 GMT",
"version": "v1"
}
] |
2019-06-04
|
[
[
"Franceschelli",
"Mauro",
""
],
[
"Frasca",
"Paolo",
""
]
] |
In this technical note we consider a class of multi-agent network systems that we refer to as Open Multi-Agent Systems (OMAS): in these multi-agent systems, an indefinite number of agents may join or leave the network at any time. Focusing on discrete-time evolutions of scalar agents, we provide a novel theoretical framework to study the dynamical properties of OMAS: specifically, we propose a suitable notion of stability and derive sufficient conditions to ensure stability in this sense. These sufficient conditions regard the arrival/departure of an agent as a disturbance: consistently, they require the effect of arrivals/departures to be bounded (in a precise sense) and the OMAS to be contractive in the absence of arrivals/departures. In order to provide an example of application for this theory, we re-formulate the well-known Proportional Dynamic Consensus for Open Multi-Agent Systems and we characterize the stability properties of the resulting Open Proportional Dynamic Consensus algorithm.
|
2405.07845
|
Zhenguo Gao Prof.
|
Shulei Qu, Zhenguo Gao, Xiaowei Chen, Na Li, Yakai Wang, Xiaoxiao Wu
|
Multi-Task Learning for Fatigue Detection and Face Recognition of
Drivers via Tree-Style Space-Channel Attention Fusion Network
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In driving scenarios, automobile active safety systems are increasingly
incorporating deep learning technology. These systems typically need to handle
multiple tasks simultaneously, such as detecting fatigue driving and
recognizing the driver's identity. However, the traditional parallel-style
approach of combining multiple single-task models tends to waste resources when
dealing with similar tasks. Therefore, we propose a novel tree-style
multi-task modeling approach in which, rooted at a shared backbone, more
dedicated separate module branches are appended as the model pipeline goes
deeper. Following the tree-style approach, we propose a multi-task learning
model for simultaneously performing driver fatigue detection and face
recognition for identifying a driver. This model shares a common feature
extraction backbone module, with further separated feature extraction and
classification module branches. The dedicated branches exploit and combine
spatial and channel attention mechanisms to generate space-channel
fused-attention enhanced features, leading to improved detection performance.
As only single-task datasets are available, we introduce techniques including
alternating updates and gradient accumulation for training our multi-task
model using only the single-task datasets. The effectiveness of our tree-style
multi-task learning model is verified through extensive validations.
|
[
{
"created": "Mon, 13 May 2024 15:34:20 GMT",
"version": "v1"
}
] |
2024-05-14
|
[
[
"Qu",
"Shulei",
""
],
[
"Gao",
"Zhenguo",
""
],
[
"Chen",
"Xiaowei",
""
],
[
"Li",
"Na",
""
],
[
"Wang",
"Yakai",
""
],
[
"Wu",
"Xiaoxiao",
""
]
] |
In driving scenarios, automobile active safety systems are increasingly incorporating deep learning technology. These systems typically need to handle multiple tasks simultaneously, such as detecting fatigue driving and recognizing the driver's identity. However, the traditional parallel-style approach of combining multiple single-task models tends to waste resources when dealing with similar tasks. Therefore, we propose a novel tree-style multi-task modeling approach in which, rooted at a shared backbone, more dedicated separate module branches are appended as the model pipeline goes deeper. Following the tree-style approach, we propose a multi-task learning model for simultaneously performing driver fatigue detection and face recognition for identifying a driver. This model shares a common feature extraction backbone module, with further separated feature extraction and classification module branches. The dedicated branches exploit and combine spatial and channel attention mechanisms to generate space-channel fused-attention enhanced features, leading to improved detection performance. As only single-task datasets are available, we introduce techniques including alternating updates and gradient accumulation for training our multi-task model using only the single-task datasets. The effectiveness of our tree-style multi-task learning model is verified through extensive validations.
|
1704.00537
|
Ahmad Nauman Ghazi
|
Ahmad Nauman Ghazi, Kai Petersen, Elizabeth Bjarnason, Per Runeson
|
Exploratory Testing: One Size Doesn't Fit All
|
Submitted to IEEE Software
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exploratory testing (ET) is a powerful and efficient way of testing software
by integrating design, execution, and analysis of tests during a testing
session. ET is often contrasted with scripted testing, and seen as a choice
between black and white. We pose that there are different levels of exploratory
testing from fully exploratory to fully scripted and propose a scale for the
degree of exploration for ET. The degree is defined through levels of ET, which
correspond to the way test charters are formulated. We have evaluated the
classification through focus groups at four companies and identified factors
that influence the level of exploratory testing. The results show that the
proposed ET levels have distinguishing characteristics and that the levels can
be used as a guide to structure test charters. Our study also indicates that
applying a combination of ET levels can be beneficial in achieving effective
testing.
|
[
{
"created": "Mon, 3 Apr 2017 11:46:55 GMT",
"version": "v1"
}
] |
2017-04-04
|
[
[
"Ghazi",
"Ahmad Nauman",
""
],
[
"Petersen",
"Kai",
""
],
[
"Bjarnason",
"Elizabeth",
""
],
[
"Runeson",
"Per",
""
]
] |
Exploratory testing (ET) is a powerful and efficient way of testing software by integrating design, execution, and analysis of tests during a testing session. ET is often contrasted with scripted testing, and seen as a choice between black and white. We pose that there are different levels of exploratory testing from fully exploratory to fully scripted and propose a scale for the degree of exploration for ET. The degree is defined through levels of ET, which correspond to the way test charters are formulated. We have evaluated the classification through focus groups at four companies and identified factors that influence the level of exploratory testing. The results show that the proposed ET levels have distinguishing characteristics and that the levels can be used as a guide to structure test charters. Our study also indicates that applying a combination of ET levels can be beneficial in achieving effective testing.
|
2102.09256
|
Elias Rohrer
|
Kimberly Lange, Elias Rohrer, Florian Tschorsch
|
On the Impact of Attachment Strategies for Payment Channel Networks
|
To be published in conjunction with the 2021 IEEE International
Conference on Blockchain and Cryptocurrency (ICBC)
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Payment channel networks, such as Bitcoin's Lightning Network, promise to
improve the scalability of blockchain systems by processing the majority of
transactions off-chain. Due to the design, the positioning of nodes in the
network topology is a highly influential factor regarding the experienced
performance, costs, and fee revenue of network participants. As a consequence,
today's Lightning Network is built around a small number of highly-connected
hubs. Recent literature shows the centralizing tendencies to be
incentive-compatible and at the same time detrimental to security and privacy.
The choice of attachment strategies therefore becomes a crucial factor for the
future of such systems. In this paper, we provide an empirical study on the
(local and global) impact of various attachment strategies for payment channel
networks. To this end, we introduce candidate strategies from the field of
graph theory and analyze them with respect to their computational complexity as
well as their repercussions for end users and service providers. Moreover, we
evaluate their long-term impact on the network topology.
|
[
{
"created": "Thu, 18 Feb 2021 10:27:59 GMT",
"version": "v1"
}
] |
2021-02-19
|
[
[
"Lange",
"Kimberly",
""
],
[
"Rohrer",
"Elias",
""
],
[
"Tschorsch",
"Florian",
""
]
] |
Payment channel networks, such as Bitcoin's Lightning Network, promise to improve the scalability of blockchain systems by processing the majority of transactions off-chain. Due to the design, the positioning of nodes in the network topology is a highly influential factor regarding the experienced performance, costs, and fee revenue of network participants. As a consequence, today's Lightning Network is built around a small number of highly-connected hubs. Recent literature shows the centralizing tendencies to be incentive-compatible and at the same time detrimental to security and privacy. The choice of attachment strategies therefore becomes a crucial factor for the future of such systems. In this paper, we provide an empirical study on the (local and global) impact of various attachment strategies for payment channel networks. To this end, we introduce candidate strategies from the field of graph theory and analyze them with respect to their computational complexity as well as their repercussions for end users and service providers. Moreover, we evaluate their long-term impact on the network topology.
|
1704.02083
|
Li Sulimowicz Mrs.
|
Li Sulimowicz, Ishfaq Ahmad
|
"RAPID" Regions-of-Interest Detection In Big Histopathological Images
|
6 pages, 5 figures, ICME conference
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The sheer volume and size of histopathological images (e.g., 10^6 MPixels)
underscore the need for faster and more accurate Regions-of-Interest (ROI)
detection algorithms. In this paper, we propose such an algorithm, which has
four main components that help achieve greater accuracy and faster speed:
First, while using coarse-to-fine topology preserving segmentation as the
baseline, the proposed algorithm uses a superpixel regularity optimization
scheme for avoiding irregular and extremely small superpixels. Second, the
proposed technique employs a prediction strategy to focus only on important
superpixels at finer image levels. Third, the algorithm reuses the information
gained from the coarsest image level at other finer image levels. Both the
second and the third components drastically lower the complexity. Fourth, the
algorithm employs a highly effective parallelization scheme using adaptive
data partitioning, which gains a high speedup. Experimental results, conducted
on the BSD500 [1] and 500 whole-slide histological images from the National
Lung Screening Trial (NLST) dataset, confirm that the proposed algorithm
gained a 13 times speedup compared with the baseline, and around 160 times
compared with SLIC [11], without losing accuracy.
|
[
{
"created": "Fri, 7 Apr 2017 03:34:40 GMT",
"version": "v1"
}
] |
2017-04-10
|
[
[
"Sulimowicz",
"Li",
""
],
[
"Ahmad",
"Ishfaq",
""
]
] |
The sheer volume and size of histopathological images (e.g., 10^6 MPixels) underscore the need for faster and more accurate Regions-of-Interest (ROI) detection algorithms. In this paper, we propose such an algorithm, which has four main components that help achieve greater accuracy and faster speed: First, while using coarse-to-fine topology preserving segmentation as the baseline, the proposed algorithm uses a superpixel regularity optimization scheme for avoiding irregular and extremely small superpixels. Second, the proposed technique employs a prediction strategy to focus only on important superpixels at finer image levels. Third, the algorithm reuses the information gained from the coarsest image level at other finer image levels. Both the second and the third components drastically lower the complexity. Fourth, the algorithm employs a highly effective parallelization scheme using adaptive data partitioning, which gains a high speedup. Experimental results, conducted on the BSD500 [1] and 500 whole-slide histological images from the National Lung Screening Trial (NLST) dataset, confirm that the proposed algorithm gained a 13 times speedup compared with the baseline, and around 160 times compared with SLIC [11], without losing accuracy.
|
2309.16292
|
Licheng Wen
|
Licheng Wen, Daocheng Fu, Xin Li, Xinyu Cai, Tao Ma, Pinlong Cai, Min
Dou, Botian Shi, Liang He, Yu Qiao
|
DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large
Language Models
|
Published as a conference paper at ICLR 2024
| null | null | null |
cs.RO cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in autonomous driving have relied on data-driven
approaches, which are widely adopted but face challenges including dataset
bias, overfitting, and uninterpretability. Drawing inspiration from the
knowledge-driven nature of human driving, we explore the question of how to
instill similar capabilities into autonomous driving systems and summarize a
paradigm that integrates an interactive environment, a driver agent, as well as
a memory component to address this question. Leveraging large language models
(LLMs) with emergent abilities, we propose the DiLu framework, which combines a
Reasoning and a Reflection module to enable the system to perform
decision-making based on common-sense knowledge and evolve continuously.
Extensive experiments prove DiLu's capability to accumulate experience and
demonstrate a significant advantage in generalization ability over
reinforcement learning-based methods. Moreover, DiLu is able to directly
acquire experiences from real-world datasets which highlights its potential to
be deployed on practical autonomous driving systems. To the best of our
knowledge, we are the first to leverage knowledge-driven capability in
decision-making for autonomous vehicles. Through the proposed DiLu framework,
LLM is strengthened to apply knowledge and to reason causally in the autonomous
driving domain. Project page: https://pjlab-adg.github.io/DiLu/
|
[
{
"created": "Thu, 28 Sep 2023 09:41:35 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Oct 2023 11:11:47 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Feb 2024 03:24:26 GMT",
"version": "v3"
}
] |
2024-02-23
|
[
[
"Wen",
"Licheng",
""
],
[
"Fu",
"Daocheng",
""
],
[
"Li",
"Xin",
""
],
[
"Cai",
"Xinyu",
""
],
[
"Ma",
"Tao",
""
],
[
"Cai",
"Pinlong",
""
],
[
"Dou",
"Min",
""
],
[
"Shi",
"Botian",
""
],
[
"He",
"Liang",
""
],
[
"Qiao",
"Yu",
""
]
] |
Recent advancements in autonomous driving have relied on data-driven approaches, which are widely adopted but face challenges including dataset bias, overfitting, and uninterpretability. Drawing inspiration from the knowledge-driven nature of human driving, we explore the question of how to instill similar capabilities into autonomous driving systems and summarize a paradigm that integrates an interactive environment, a driver agent, as well as a memory component to address this question. Leveraging large language models (LLMs) with emergent abilities, we propose the DiLu framework, which combines a Reasoning and a Reflection module to enable the system to perform decision-making based on common-sense knowledge and evolve continuously. Extensive experiments prove DiLu's capability to accumulate experience and demonstrate a significant advantage in generalization ability over reinforcement learning-based methods. Moreover, DiLu is able to directly acquire experiences from real-world datasets which highlights its potential to be deployed on practical autonomous driving systems. To the best of our knowledge, we are the first to leverage knowledge-driven capability in decision-making for autonomous vehicles. Through the proposed DiLu framework, LLM is strengthened to apply knowledge and to reason causally in the autonomous driving domain. Project page: https://pjlab-adg.github.io/DiLu/
|
2309.09120
|
Zhixuan Zhou
|
Kyrie Zhixuan Zhou, Madelyn Rose Sanfilippo
|
Public Perceptions of Gender Bias in Large Language Models: Cases of
ChatGPT and Ernie
| null | null | null | null |
cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models are quickly gaining momentum, yet are found to
demonstrate gender bias in their responses. In this paper, we conducted a
content analysis of social media discussions to gauge public perceptions of
gender bias in LLMs which are trained in different cultural contexts, i.e.,
ChatGPT, a US-based LLM, or Ernie, a China-based LLM. People shared both
observations of gender bias in their personal use and scientific findings about
gender bias in LLMs. A difference between the two LLMs was seen -- ChatGPT was
more often found to carry implicit gender bias, e.g., associating men and women
with different profession titles, while explicit gender bias was found in
Ernie's responses, e.g., overly promoting women's pursuit of marriage over
career. Based on the findings, we reflect on the impact of culture on gender
bias and propose governance recommendations to regulate gender bias in LLMs.
|
[
{
"created": "Sun, 17 Sep 2023 00:53:34 GMT",
"version": "v1"
}
] |
2023-09-19
|
[
[
"Zhou",
"Kyrie Zhixuan",
""
],
[
"Sanfilippo",
"Madelyn Rose",
""
]
] |
Large language models are quickly gaining momentum, yet are found to demonstrate gender bias in their responses. In this paper, we conducted a content analysis of social media discussions to gauge public perceptions of gender bias in LLMs which are trained in different cultural contexts, i.e., ChatGPT, a US-based LLM, or Ernie, a China-based LLM. People shared both observations of gender bias in their personal use and scientific findings about gender bias in LLMs. A difference between the two LLMs was seen -- ChatGPT was more often found to carry implicit gender bias, e.g., associating men and women with different profession titles, while explicit gender bias was found in Ernie's responses, e.g., overly promoting women's pursuit of marriage over career. Based on the findings, we reflect on the impact of culture on gender bias and propose governance recommendations to regulate gender bias in LLMs.
|
2403.17881
|
Gan Pei
|
Gan Pei, Jiangning Zhang, Menghan Hu, Zhenyu Zhang, Chengjie Wang,
Yunsheng Wu, Guangtao Zhai, Jian Yang, Chunhua Shen, Dacheng Tao
|
Deepfake Generation and Detection: A Benchmark and Survey
|
We closely follow the latest developments in
https://github.com/flyingby/Awesome-Deepfake-Generation-and-Detection
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deepfake is a technology dedicated to creating highly realistic facial images
and videos under specific conditions, which has significant application
potential in fields such as entertainment, movie production, digital human
creation, to name a few. With the advancements in deep learning, techniques
primarily represented by Variational Autoencoders and Generative Adversarial
Networks have achieved impressive generation results. More recently, the
emergence of diffusion models with powerful generation capabilities has sparked
a renewed wave of research. In addition to deepfake generation, corresponding
detection technologies continuously evolve to regulate the potential misuse of
deepfakes, such as for privacy invasion and phishing attacks. This survey
comprehensively reviews the latest developments in deepfake generation and
detection, summarizing and analyzing the current state of the art in this rapidly
evolving field. We first unify task definitions, comprehensively introduce
datasets and metrics, and discuss developing technologies. Then, we discuss the
development of several related sub-fields and focus on researching four
representative deepfake fields: face swapping, face reenactment, talking face
generation, and facial attribute editing, as well as forgery detection.
Subsequently, we comprehensively benchmark representative methods on popular
datasets for each field, fully evaluating the latest and influential published
works. Finally, we analyze challenges and future research directions of the
discussed fields.
|
[
{
"created": "Tue, 26 Mar 2024 17:12:34 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Apr 2024 13:56:06 GMT",
"version": "v2"
},
{
"created": "Sat, 20 Apr 2024 09:06:02 GMT",
"version": "v3"
},
{
"created": "Thu, 16 May 2024 10:38:58 GMT",
"version": "v4"
}
] |
2024-05-17
|
[
[
"Pei",
"Gan",
""
],
[
"Zhang",
"Jiangning",
""
],
[
"Hu",
"Menghan",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Wang",
"Chengjie",
""
],
[
"Wu",
"Yunsheng",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Yang",
"Jian",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Tao",
"Dacheng",
""
]
] |
Deepfake is a technology dedicated to creating highly realistic facial images and videos under specific conditions, which has significant application potential in fields such as entertainment, movie production, digital human creation, to name a few. With the advancements in deep learning, techniques primarily represented by Variational Autoencoders and Generative Adversarial Networks have achieved impressive generation results. More recently, the emergence of diffusion models with powerful generation capabilities has sparked a renewed wave of research. In addition to deepfake generation, corresponding detection technologies continuously evolve to regulate the potential misuse of deepfakes, such as for privacy invasion and phishing attacks. This survey comprehensively reviews the latest developments in deepfake generation and detection, summarizing and analyzing the current state of the art in this rapidly evolving field. We first unify task definitions, comprehensively introduce datasets and metrics, and discuss developing technologies. Then, we discuss the development of several related sub-fields and focus on researching four representative deepfake fields: face swapping, face reenactment, talking face generation, and facial attribute editing, as well as forgery detection. Subsequently, we comprehensively benchmark representative methods on popular datasets for each field, fully evaluating the latest and influential published works. Finally, we analyze challenges and future research directions of the discussed fields.
|
2101.01139
|
Sarah Aguasvivas Manzano
|
Sarah Aguasvivas Manzano, Patricia Xu, Khoi Ly, Robert Shepherd,
Nikolaus Correll
|
High-bandwidth nonlinear control for soft actuators with recursive
network models
|
International Symposium on Experimental Robotics (ISER) 2020, Malta
| null |
10.1007/978-3-030-71151-1_52
| null |
cs.RO cs.AI cs.NA cs.SE cs.SY eess.SY math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
We present a high-bandwidth, lightweight, and nonlinear output tracking
technique for soft actuators that combines parsimonious recursive layers for
forward output predictions and online optimization using Newton-Raphson. This
technique allows for reduced model sizes and increased control loop frequencies
when compared with conventional RNN models. Experimental results of this
controller prototype on a single soft actuator with soft positional sensors
indicate effective tracking of referenced spatial trajectories and rejection of
mechanical and electromagnetic disturbances. These are evidenced by root mean
squared path tracking errors (RMSE) of 1.8mm using a fully connected (FC)
substructure, 1.62mm using a gated recurrent unit (GRU) and 2.11mm using a long
short term memory (LSTM) unit, all averaged over three tasks. Among these
models, the highest flash memory requirement is 2.22kB enabling co-location of
controller and actuator.
|
[
{
"created": "Mon, 4 Jan 2021 18:12:41 GMT",
"version": "v1"
}
] |
2022-05-24
|
[
[
"Manzano",
"Sarah Aguasvivas",
""
],
[
"Xu",
"Patricia",
""
],
[
"Ly",
"Khoi",
""
],
[
"Shepherd",
"Robert",
""
],
[
"Correll",
"Nikolaus",
""
]
] |
We present a high-bandwidth, lightweight, and nonlinear output tracking technique for soft actuators that combines parsimonious recursive layers for forward output predictions and online optimization using Newton-Raphson. This technique allows for reduced model sizes and increased control loop frequencies when compared with conventional RNN models. Experimental results of this controller prototype on a single soft actuator with soft positional sensors indicate effective tracking of referenced spatial trajectories and rejection of mechanical and electromagnetic disturbances. These are evidenced by root mean squared path tracking errors (RMSE) of 1.8mm using a fully connected (FC) substructure, 1.62mm using a gated recurrent unit (GRU) and 2.11mm using a long short term memory (LSTM) unit, all averaged over three tasks. Among these models, the highest flash memory requirement is 2.22kB enabling co-location of controller and actuator.
|
2404.17754
|
Kohei Fujita
|
Tsuyoshi Ichimura, Kohei Fujita, Ryota Kusakabe, Hiroyuki Fujiwara,
Muneo Hori, Maddegedara Lalith
|
Development of an Estimation Method for the Seismic Motion
Reproducibility of a Three-dimensional Ground Structure Model by combining
Surface-observed Seismic Motion and Three-dimensional Seismic Motion Analysis
|
16 pages, 10 figures, accepted for IHPCES/ICCS 2024 (14th
International Workshop on Advances in High-Performance Computational Earth
Sciences: NumericalMethods, Frameworks & Applications / 24th International
Conference on Computational Science)
|
ICCS 2024. ICCS 2024. Lecture Notes in Computer Science, vol
14834. Springer, Cham
|
10.1007/978-3-031-63759-9_32
| null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
The ground structure can substantially influence seismic ground motion
underscoring the need to develop a ground structure model with sufficient
reliability in terms of ground motion estimation for earthquake damage
mitigation. While many methods for generating ground structure models have been
proposed and used in practice, there remains room for enhancing their
reliability. In this study, amid many candidate 3D ground structure models
generated from geotechnical engineering knowledge, we propose a method for
selecting a credible 3D ground structure model capable of reproducing observed
earthquake ground motion, utilizing seismic ground motion data solely observed
at the ground surface and employing 3D seismic ground motion analysis. Through
a numerical experiment, we illustrate the efficacy of this approach. By
conducting $10^2$-$10^3$ cases of fast 3D seismic wave propagation analyses
using graphic processing units (GPUs), we demonstrate that a credible 3D ground
structure model is selected according to the quantity of seismic motion
information. We show the effectiveness of the proposed method by showing that
the accuracy of seismic motions using ground structure models that were
selected from the pool of candidate models is higher than that using ground
structure models that were not selected from the pool of candidate models.
|
[
{
"created": "Sat, 27 Apr 2024 02:04:57 GMT",
"version": "v1"
}
] |
2024-07-02
|
[
[
"Ichimura",
"Tsuyoshi",
""
],
[
"Fujita",
"Kohei",
""
],
[
"Kusakabe",
"Ryota",
""
],
[
"Fujiwara",
"Hiroyuki",
""
],
[
"Hori",
"Muneo",
""
],
[
"Lalith",
"Maddegedara",
""
]
] |
The ground structure can substantially influence seismic ground motion underscoring the need to develop a ground structure model with sufficient reliability in terms of ground motion estimation for earthquake damage mitigation. While many methods for generating ground structure models have been proposed and used in practice, there remains room for enhancing their reliability. In this study, amid many candidate 3D ground structure models generated from geotechnical engineering knowledge, we propose a method for selecting a credible 3D ground structure model capable of reproducing observed earthquake ground motion, utilizing seismic ground motion data solely observed at the ground surface and employing 3D seismic ground motion analysis. Through a numerical experiment, we illustrate the efficacy of this approach. By conducting $10^2$-$10^3$ cases of fast 3D seismic wave propagation analyses using graphic processing units (GPUs), we demonstrate that a credible 3D ground structure model is selected according to the quantity of seismic motion information. We show the effectiveness of the proposed method by showing that the accuracy of seismic motions using ground structure models that were selected from the pool of candidate models is higher than that using ground structure models that were not selected from the pool of candidate models.
|
1711.02059
|
Serhii Nazarovets
|
Vladimir Lazarev, Serhii Nazarovets and Alexey Skalaban
|
Evaluation of research activities of universities of Ukraine and
Belarus: a set of bibliometric indicators and its implementation
| null |
Romanian Journal of Library and Information Science. 2017, 13(3):
75-84
|
10.26660/rrbsi.2017.13.3.75
| null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
Monitoring the bibliometric indicators of university rankings is considered
part of a university library's activity. In order to carry out a comparative
assessment of the research activities of the universities of Ukraine and
Belarus, the authors introduced a set of bibliometric indicators. A
comparative assessment of the research activities of the corresponding
universities was performed; the data on the leading universities are
presented. The sensitivity of one of the indicators to rapid changes in the
research activity of universities, and the fact that the other is normalized
across the fields of science, give the proposed set an advantage over the one
that was used in practice in the corresponding national rankings.
|
[
{
"created": "Mon, 6 Nov 2017 18:05:40 GMT",
"version": "v1"
}
] |
2018-01-11
|
[
[
"Lazarev",
"Vladimir",
""
],
[
"Nazarovets",
"Serhii",
""
],
[
"Skalaban",
"Alexey",
""
]
] |
Monitoring the bibliometric indicators of university rankings is considered part of a university library's activity. In order to carry out a comparative assessment of the research activities of the universities of Ukraine and Belarus, the authors introduced a set of bibliometric indicators. A comparative assessment of the research activities of the corresponding universities was performed; the data on the leading universities are presented. The sensitivity of one of the indicators to rapid changes in the research activity of universities, and the fact that the other is normalized across the fields of science, give the proposed set an advantage over the one that was used in practice in the corresponding national rankings.
|
2006.07350
|
Kashif Inayat
|
Usama Khalid, Muhammad Abdullah and Kashif Inayat
|
Exploiting ML algorithms for Efficient Detection and Prevention of
JavaScript-XSS Attacks in Android Based Hybrid Applications
| null | null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development and analysis of mobile applications in terms of security has
become an active research area in recent years, as many apps are vulnerable to
different attacks. In particular, the concept of hybrid applications has
emerged in the last three years, where applications are developed in both
native and web languages. The use of web languages raises certain security
risks in hybrid mobile applications, as it creates possible channels through
which malicious code can be injected into the application. WebView is an
important component in hybrid mobile applications that implements a sandbox
mechanism to protect the local resources of smartphone devices from
unauthorized access by JavaScript. However, the WebView application
programming interfaces (APIs) also have security issues. For example, an
attacker can attack the hybrid application via JavaScript code by bypassing
the sandbox security and accessing the public methods of the application.
Cross-site scripting (XSS) is one of the most popular malicious code injection
techniques for accessing the public methods of the application through
JavaScript. This research proposes a framework for the detection and
prevention of XSS attacks in hybrid applications using state-of-the-art
machine learning (ML) algorithms. The detection of the attacks has been
performed by exploiting the registered Java object features. The dataset and
the sample hybrid applications have been developed using Android Studio. Then
the widely used toolkit, RapidMiner, has been used for empirical analysis. The
results reveal that the ensemble-based Random Forest algorithm outperforms the
other algorithms and achieves both accuracy and F-measures as high as 99%.
|
[
{
"created": "Fri, 12 Jun 2020 17:39:26 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jul 2020 06:37:37 GMT",
"version": "v2"
}
] |
2020-07-31
|
[
[
"Khalid",
"Usama",
""
],
[
"Abdullah",
"Muhammad",
""
],
[
"Inayat",
"Kashif",
""
]
] |
The development and analysis of mobile applications in terms of security has become an active research area in recent years, as many apps are vulnerable to different attacks. In particular, the concept of hybrid applications has emerged in the last three years, where applications are developed in both native and web languages. The use of web languages raises certain security risks in hybrid mobile applications, as it creates possible channels through which malicious code can be injected into the application. WebView is an important component in hybrid mobile applications that implements a sandbox mechanism to protect the local resources of smartphone devices from unauthorized access by JavaScript. However, the WebView application programming interfaces (APIs) also have security issues. For example, an attacker can attack the hybrid application via JavaScript code by bypassing the sandbox security and accessing the public methods of the application. Cross-site scripting (XSS) is one of the most popular malicious code injection techniques for accessing the public methods of the application through JavaScript. This research proposes a framework for the detection and prevention of XSS attacks in hybrid applications using state-of-the-art machine learning (ML) algorithms. The detection of the attacks has been performed by exploiting the registered Java object features. The dataset and the sample hybrid applications have been developed using Android Studio. Then the widely used toolkit, RapidMiner, has been used for empirical analysis. The results reveal that the ensemble-based Random Forest algorithm outperforms the other algorithms and achieves both accuracy and F-measures as high as 99%.
|
2009.07227
|
Tiankai Xie
|
Tiankai Xie, Yuxin Ma, Hanghang Tong, My T. Thai, Ross Maciejewski
|
Auditing the Sensitivity of Graph-based Ranking with Visual Analytics
|
11 pages, accepted by IEEE Transactions on Visualization and Computer
Graphics
| null | null | null |
cs.SI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph mining plays a pivotal role across a number of disciplines, and a
variety of algorithms have been developed to answer who/what type questions.
For example, what items shall we recommend to a given user on an e-commerce
platform? The answers to such questions are typically returned in the form of a
ranked list, and graph-based ranking methods are widely used in industrial
information retrieval settings. However, these ranking algorithms have a
variety of sensitivities, and even small changes in rank can lead to vast
reductions in product sales and page hits. As such, there is a need for tools
and methods that can help model developers and analysts explore the
sensitivities of graph ranking algorithms with respect to perturbations within
the graph structure. In this paper, we present a visual analytics framework for
explaining and exploring the sensitivity of any graph-based ranking algorithm
by performing perturbation-based what-if analysis. We demonstrate our framework
through three case studies inspecting the sensitivity of two classic
graph-based ranking algorithms (PageRank and HITS) as applied to rankings in
political news media and social networks.
|
[
{
"created": "Tue, 15 Sep 2020 17:07:20 GMT",
"version": "v1"
}
] |
2020-09-16
|
[
[
"Xie",
"Tiankai",
""
],
[
"Ma",
"Yuxin",
""
],
[
"Tong",
"Hanghang",
""
],
[
"Thai",
"My T.",
""
],
[
"Maciejewski",
"Ross",
""
]
] |
Graph mining plays a pivotal role across a number of disciplines, and a variety of algorithms have been developed to answer who/what type questions. For example, what items shall we recommend to a given user on an e-commerce platform? The answers to such questions are typically returned in the form of a ranked list, and graph-based ranking methods are widely used in industrial information retrieval settings. However, these ranking algorithms have a variety of sensitivities, and even small changes in rank can lead to vast reductions in product sales and page hits. As such, there is a need for tools and methods that can help model developers and analysts explore the sensitivities of graph ranking algorithms with respect to perturbations within the graph structure. In this paper, we present a visual analytics framework for explaining and exploring the sensitivity of any graph-based ranking algorithm by performing perturbation-based what-if analysis. We demonstrate our framework through three case studies inspecting the sensitivity of two classic graph-based ranking algorithms (PageRank and HITS) as applied to rankings in political news media and social networks.
|
1910.11683
|
Antony Thomas
|
Antony Thomas, Fulvio Mastrogiovanni and Marco Baglietto
|
Task-Motion Planning for Navigation in Belief Space
|
Accepted for publication in the proceedings of the International
Symposium on Robotics Research (ISRR) 2019. arXiv admin note: text overlap
with arXiv:1908.10227
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an integrated Task-Motion Planning (TMP) framework for navigation
in large-scale environments. Autonomous robots operating in real-world complex
scenarios require planning in the discrete (task) space and the continuous
(motion) space. In knowledge-intensive domains, on the one hand, a robot has
to reason at the highest level, for example about the regions to navigate to;
on the other hand, the feasibility of the respective navigation tasks has to
be checked at the execution level. This presents a need for
motion-planning-aware task planners. We discuss a probabilistically complete
approach that leverages this task-motion interaction for navigating in indoor
domains, returning a plan that is optimal at the task level. Furthermore, our
framework is intended for motion planning under motion and sensing
uncertainty, which is formally known as belief space planning. The underlying
methodology is validated with a simulated office environment in Gazebo. In
addition, we discuss the limitations and provide suggestions for improvements
and future work.
|
[
{
"created": "Thu, 24 Oct 2019 10:11:50 GMT",
"version": "v1"
}
] |
2019-10-28
|
[
[
"Thomas",
"Antony",
""
],
[
"Mastrogiovanni",
"Fulvio",
""
],
[
"Baglietto",
"Marco",
""
]
] |
We present an integrated Task-Motion Planning (TMP) framework for navigation in large-scale environments. Autonomous robots operating in real-world complex scenarios require planning in the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, on the one hand, a robot has to reason at the highest level, for example about the regions to navigate to; on the other hand, the feasibility of the respective navigation tasks has to be checked at the execution level. This presents a need for motion-planning-aware task planners. We discuss a probabilistically complete approach that leverages this task-motion interaction for navigating in indoor domains, returning a plan that is optimal at the task-level. Furthermore, our framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning. The underlying methodology is validated with a simulated office environment in Gazebo. In addition, we discuss the limitations and provide suggestions for improvements and future work.
|
1805.02266
|
Vered Shwartz
|
Max Glockner, Vered Shwartz, and Yoav Goldberg
|
Breaking NLI Systems with Sentences that Require Simple Lexical
Inferences
|
6 pages, short paper at ACL 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We create a new NLI test set that shows the deficiency of state-of-the-art
models in inferences that require lexical and world knowledge. The new examples
are simpler than the SNLI test set, containing sentences that differ by at most
one word from sentences in the training set. Yet, the performance on the new
test set is substantially worse across systems trained on SNLI, demonstrating
that these systems are limited in their generalization ability, failing to
capture many simple inferences.
|
[
{
"created": "Sun, 6 May 2018 18:49:48 GMT",
"version": "v1"
}
] |
2018-05-08
|
[
[
"Glockner",
"Max",
""
],
[
"Shwartz",
"Vered",
""
],
[
"Goldberg",
"Yoav",
""
]
] |
We create a new NLI test set that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge. The new examples are simpler than the SNLI test set, containing sentences that differ by at most one word from sentences in the training set. Yet, the performance on the new test set is substantially worse across systems trained on SNLI, demonstrating that these systems are limited in their generalization ability, failing to capture many simple inferences.
|
1803.06978
|
Cihang Xie
|
Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou
Ren, Alan Yuille
|
Improving Transferability of Adversarial Examples with Input Diversity
|
CVPR 2019, code is available at:
https://github.com/cihangxie/DI-2-FGSM
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though CNNs have achieved the state-of-the-art performance on various vision
tasks, they are vulnerable to adversarial examples --- crafted by adding
human-imperceptible perturbations to clean images. However, most of the
existing adversarial attacks only achieve relatively low success rates under
the challenging black-box setting, where the attackers have no knowledge of the
model structure and parameters. To this end, we propose to improve the
transferability of adversarial examples by creating diverse input patterns.
Instead of only using the original images to generate adversarial examples, our
method applies random transformations to the input images at each iteration.
Extensive experiments on ImageNet show that the proposed attack method can
generate adversarial examples that transfer much better to different networks
than existing baselines. By evaluating our method against top defense solutions
and official baselines from NIPS 2017 adversarial competition, the enhanced
attack reaches an average success rate of 73.0%, which outperforms the top-1
attack submission in the NIPS competition by a large margin of 6.6%. We hope
that our proposed attack strategy can serve as a strong benchmark baseline for
evaluating the robustness of networks to adversaries and the effectiveness of
different defense methods in the future. Code is available at
https://github.com/cihangxie/DI-2-FGSM.
|
[
{
"created": "Mon, 19 Mar 2018 15:07:51 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Jun 2018 00:15:38 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Mar 2019 02:29:17 GMT",
"version": "v3"
},
{
"created": "Sat, 1 Jun 2019 17:12:24 GMT",
"version": "v4"
}
] |
2019-06-04
|
[
[
"Xie",
"Cihang",
""
],
[
"Zhang",
"Zhishuai",
""
],
[
"Zhou",
"Yuyin",
""
],
[
"Bai",
"Song",
""
],
[
"Wang",
"Jianyu",
""
],
[
"Ren",
"Zhou",
""
],
[
"Yuille",
"Alan",
""
]
] |
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples --- crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
|
2010.03106
|
Ruoqi Shen
|
Yin Tat Lee, Ruoqi Shen, Kevin Tian
|
Structured Logconcave Sampling with a Restricted Gaussian Oracle
|
58 pages. The results of Section 5 of this paper, as well as an
empirical evaluation, appeared earlier as arXiv:2006.05976. This version
fixes an error in the proof of Theorem 1, see Section 1.4
| null | null | null |
cs.DS cs.LG math.OC stat.CO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give algorithms for sampling several structured logconcave families to
high accuracy. We further develop a reduction framework, inspired by proximal
point methods in convex optimization, which bootstraps samplers for regularized
densities to improve dependences on problem conditioning. A key ingredient in
our framework is the notion of a "restricted Gaussian oracle" (RGO) for $g:
\mathbb{R}^d \rightarrow \mathbb{R}$, which is a sampler for distributions
whose negative log-likelihood sums a quadratic and $g$. By combining our
reduction framework with our new samplers, we obtain the following bounds for
sampling structured distributions to total variation distance $\epsilon$. For
composite densities $\exp(-f(x) - g(x))$, where $f$ has condition number
$\kappa$ and convex (but possibly non-smooth) $g$ admits an RGO, we obtain a
mixing time of $O(\kappa d \log^3\frac{\kappa d}{\epsilon})$, matching the
state-of-the-art non-composite bound; no composite samplers with better mixing
than general-purpose logconcave samplers were previously known. For logconcave
finite sums $\exp(-F(x))$, where $F(x) = \frac{1}{n}\sum_{i \in [n]} f_i(x)$
has condition number $\kappa$, we give a sampler querying $\widetilde{O}(n +
\kappa\max(d, \sqrt{nd}))$ gradient oracles to $\{f_i\}_{i \in [n]}$; no
high-accuracy samplers with nontrivial gradient query complexity were
previously known. For densities with condition number $\kappa$, we give an
algorithm obtaining mixing time $O(\kappa d \log^2\frac{\kappa d}{\epsilon})$,
improving the prior state-of-the-art by a logarithmic factor with a
significantly simpler analysis; we also show a zeroth-order algorithm attains
the same query complexity.
|
[
{
"created": "Wed, 7 Oct 2020 01:43:07 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Oct 2020 20:17:48 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Nov 2020 02:19:53 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Oct 2021 06:25:01 GMT",
"version": "v4"
}
] |
2021-10-25
|
[
[
"Lee",
"Yin Tat",
""
],
[
"Shen",
"Ruoqi",
""
],
[
"Tian",
"Kevin",
""
]
] |
We give algorithms for sampling several structured logconcave families to high accuracy. We further develop a reduction framework, inspired by proximal point methods in convex optimization, which bootstraps samplers for regularized densities to improve dependences on problem conditioning. A key ingredient in our framework is the notion of a "restricted Gaussian oracle" (RGO) for $g: \mathbb{R}^d \rightarrow \mathbb{R}$, which is a sampler for distributions whose negative log-likelihood sums a quadratic and $g$. By combining our reduction framework with our new samplers, we obtain the following bounds for sampling structured distributions to total variation distance $\epsilon$. For composite densities $\exp(-f(x) - g(x))$, where $f$ has condition number $\kappa$ and convex (but possibly non-smooth) $g$ admits an RGO, we obtain a mixing time of $O(\kappa d \log^3\frac{\kappa d}{\epsilon})$, matching the state-of-the-art non-composite bound; no composite samplers with better mixing than general-purpose logconcave samplers were previously known. For logconcave finite sums $\exp(-F(x))$, where $F(x) = \frac{1}{n}\sum_{i \in [n]} f_i(x)$ has condition number $\kappa$, we give a sampler querying $\widetilde{O}(n + \kappa\max(d, \sqrt{nd}))$ gradient oracles to $\{f_i\}_{i \in [n]}$; no high-accuracy samplers with nontrivial gradient query complexity were previously known. For densities with condition number $\kappa$, we give an algorithm obtaining mixing time $O(\kappa d \log^2\frac{\kappa d}{\epsilon})$, improving the prior state-of-the-art by a logarithmic factor with a significantly simpler analysis; we also show a zeroth-order algorithm attains the same query complexity.
|
1505.02377
|
Renjie Liao
|
Renjie Liao, Jianping Shi, Ziyang Ma, Jun Zhu and Jiaya Jia
|
Bounded-Distortion Metric Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Metric learning aims to embed one metric space into another to benefit tasks
like classification and clustering. Although a greatly distorted metric space
has a high degree of freedom to fit training data, it is prone to overfitting
and numerical inaccuracy. This paper presents {\it bounded-distortion metric
learning} (BDML), a new metric learning framework which amounts to finding an
optimal Mahalanobis metric space with a bounded-distortion constraint. An
efficient solver based on the multiplicative weights update method is proposed.
Moreover, we generalize BDML to pseudo-metric learning and devise the
semidefinite relaxation and a randomized algorithm to approximately solve it.
We further provide theoretical analysis to show that distortion is a key
ingredient for stability and generalization ability of our BDML algorithm.
Extensive experiments on several benchmark datasets yield promising results.
|
[
{
"created": "Sun, 10 May 2015 13:27:36 GMT",
"version": "v1"
}
] |
2015-05-12
|
[
[
"Liao",
"Renjie",
""
],
[
"Shi",
"Jianping",
""
],
[
"Ma",
"Ziyang",
""
],
[
"Zhu",
"Jun",
""
],
[
"Jia",
"Jiaya",
""
]
] |
Metric learning aims to embed one metric space into another to benefit tasks like classification and clustering. Although a greatly distorted metric space has a high degree of freedom to fit training data, it is prone to overfitting and numerical inaccuracy. This paper presents {\it bounded-distortion metric learning} (BDML), a new metric learning framework which amounts to finding an optimal Mahalanobis metric space with a bounded-distortion constraint. An efficient solver based on the multiplicative weights update method is proposed. Moreover, we generalize BDML to pseudo-metric learning and devise the semidefinite relaxation and a randomized algorithm to approximately solve it. We further provide theoretical analysis to show that distortion is a key ingredient for stability and generalization ability of our BDML algorithm. Extensive experiments on several benchmark datasets yield promising results.
|
1009.4524
|
Monica
|
Monica and Ajay K Sharma (Dr B R Ambedkar National Institute of
Technology, India)
|
Comparative Investigation for Energy Consumption of Different Chipsets
Based on Scheduling for Wireless Sensor Networks
|
17 pages, Based on scheduling for Wireless Sensor Networks
|
International Journal of Computer Networks & Communications
(IJCNC) Vol.2, No.5, September 2010
|
10.5121/ijcnc.2010.2511
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rapid progress in microelectromechanical system (MEMS) and radio frequency
(RF) design has enabled the development of low-power, inexpensive, and
network-enabled microsensors. These sensor nodes are capable of capturing
various types of physical information, such as temperature, pressure, and motion
of an object, as well as mapping such physical characteristics of the environment
to quantitative measurements. A typical wireless sensor network (WSN) consists
of hundreds to thousands of such sensor nodes linked by a wireless medium. In
this paper, we present a comparative investigation of energy consumption for
few commercially available chipsets such as TR1001, CC1000 and CC1010 based on
different scheduling methods for two types of deployment strategies. We
conducted our experiment within the OMNeT++ simulator.
|
[
{
"created": "Thu, 23 Sep 2010 06:12:22 GMT",
"version": "v1"
}
] |
2010-09-24
|
[
[
"Monica",
"",
"",
"Dr B R Ambedkar National Institute of\n Technology, India"
],
[
"Sharma",
"Ajay K",
"",
"Dr B R Ambedkar National Institute of\n Technology, India"
]
] |
Rapid progress in microelectromechanical system (MEMS) and radio frequency (RF) design has enabled the development of low-power, inexpensive, and network-enabled microsensors. These sensor nodes are capable of capturing various types of physical information, such as temperature, pressure, and motion of an object, as well as mapping such physical characteristics of the environment to quantitative measurements. A typical wireless sensor network (WSN) consists of hundreds to thousands of such sensor nodes linked by a wireless medium. In this paper, we present a comparative investigation of energy consumption for a few commercially available chipsets such as TR1001, CC1000 and CC1010 based on different scheduling methods for two types of deployment strategies. We conducted our experiment within the OMNeT++ simulator.
|
2202.06083
|
Tomoya Murata
|
Tomoya Murata and Taiji Suzuki
|
Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD
for Communication Efficient Nonconvex Distributed Learning
|
50 pages
| null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent centralized nonconvex distributed learning and federated learning,
local methods are one of the promising approaches to reduce communication time.
However, existing work has mainly focused on studying first-order optimality
guarantees. On the other side, second-order optimality guaranteed algorithms,
i.e., algorithms escaping saddle points, have been extensively studied in the
non-distributed optimization literature. In this paper, we study a new local
algorithm called Bias-Variance Reduced Local Perturbed SGD (BVR-L-PSGD), which
combines the existing bias-variance reduced gradient estimator with parameter
perturbation to find second-order optimal points in centralized nonconvex
distributed optimization. BVR-L-PSGD enjoys second-order optimality with nearly
the same communication complexity as the best known one of BVR-L-SGD to find
first-order optimality. Particularly, the communication complexity is better
than that of non-local methods when the heterogeneity of the local datasets is
smaller than the smoothness of the local loss. In an extreme case, the
communication complexity approaches $\widetilde \Theta(1)$ when the
heterogeneity of the local datasets goes to zero. Numerical results validate our theoretical
findings.
|
[
{
"created": "Sat, 12 Feb 2022 15:12:17 GMT",
"version": "v1"
},
{
"created": "Tue, 31 May 2022 08:49:31 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Oct 2022 11:21:56 GMT",
"version": "v3"
}
] |
2022-10-13
|
[
[
"Murata",
"Tomoya",
""
],
[
"Suzuki",
"Taiji",
""
]
] |
In recent centralized nonconvex distributed learning and federated learning, local methods are one of the promising approaches to reduce communication time. However, existing work has mainly focused on studying first-order optimality guarantees. On the other side, second-order optimality guaranteed algorithms, i.e., algorithms escaping saddle points, have been extensively studied in the non-distributed optimization literature. In this paper, we study a new local algorithm called Bias-Variance Reduced Local Perturbed SGD (BVR-L-PSGD), which combines the existing bias-variance reduced gradient estimator with parameter perturbation to find second-order optimal points in centralized nonconvex distributed optimization. BVR-L-PSGD enjoys second-order optimality with nearly the same communication complexity as the best known one of BVR-L-SGD to find first-order optimality. Particularly, the communication complexity is better than that of non-local methods when the heterogeneity of the local datasets is smaller than the smoothness of the local loss. In an extreme case, the communication complexity approaches $\widetilde \Theta(1)$ when the heterogeneity of the local datasets goes to zero. Numerical results validate our theoretical findings.
|
2103.09605
|
Luca Di Giammarino
|
Luca Di Giammarino, Irvin Aloise, Cyrill Stachniss and Giorgio
Grisetti
|
Visual Place Recognition using LiDAR Intensity Information
|
7 pages, 6 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots and autonomous systems need to know where they are within a map to
navigate effectively. Thus, simultaneous localization and mapping or SLAM is a
common building block of robot navigation systems. When building a map via a
SLAM system, robots need to re-recognize places to find loop closure and reduce
the odometry drift. Image-based place recognition has received a lot of attention
in computer vision, and in this work, we investigate how such approaches can be
used for 3D LiDAR data. Recent LiDAR sensors produce high-resolution 3D scans
in combination with comparably stable intensity measurements. Through a
cylindrical projection, we can turn this information into a panoramic image. As
a result, we can apply techniques from visual place recognition to LiDAR
intensity data. The question of how well this approach works in practice has
not been answered so far. This paper provides an analysis of how such visual
techniques can be used with LiDAR data, and we provide an evaluation on different
datasets. Our results suggest that this form of place recognition is possible
and an effective means for determining loop closures.
|
[
{
"created": "Wed, 17 Mar 2021 12:44:30 GMT",
"version": "v1"
}
] |
2021-03-18
|
[
[
"Di Giammarino",
"Luca",
""
],
[
"Aloise",
"Irvin",
""
],
[
"Stachniss",
"Cyrill",
""
],
[
"Grisetti",
"Giorgio",
""
]
] |
Robots and autonomous systems need to know where they are within a map to navigate effectively. Thus, simultaneous localization and mapping or SLAM is a common building block of robot navigation systems. When building a map via a SLAM system, robots need to re-recognize places to find loop closure and reduce the odometry drift. Image-based place recognition has received a lot of attention in computer vision, and in this work, we investigate how such approaches can be used for 3D LiDAR data. Recent LiDAR sensors produce high-resolution 3D scans in combination with comparably stable intensity measurements. Through a cylindrical projection, we can turn this information into a panoramic image. As a result, we can apply techniques from visual place recognition to LiDAR intensity data. The question of how well this approach works in practice has not been answered so far. This paper provides an analysis of how such visual techniques can be used with LiDAR data, and we provide an evaluation on different datasets. Our results suggest that this form of place recognition is possible and an effective means for determining loop closures.
|
1802.09348
|
Dian Pratiwi
|
Risky Armansyah, Dian Pratiwi
|
Game of the Cursed Prince based on Android
|
6 pages, 17 figures
|
International Journal of Computer Applications, Volume 179 -
Number 19, 2018
|
10.5120/ijca2018916333
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, games have become an entertainment alternative for various circles,
and the game development business is also a profitable industry. In Indonesia
the amount of game consumption is very high, especially for console games of
the RPG (Role Playing Game) type. The task of this research is developing game
software using Unity 3D to create an Android-based RPG game app. The story is
packed in the RPG genre so the player can feel the main role of the story's
imagination. The game to be built is titled The Cursed Prince. Users will get
the sensation of a royal adventure. The game features a multiplayer system and
3D graphics; the main character is the Prince, the enemies are wizards and
monsters, and the game has no time limit to complete. The game can also be
saved, so it can be reopened. The Cursed Prince can be part of the development
of the Indonesian gaming industry.
|
[
{
"created": "Mon, 19 Feb 2018 14:24:52 GMT",
"version": "v1"
}
] |
2018-02-27
|
[
[
"Armansyah",
"Risky",
""
],
[
"Pratiwi",
"Dian",
""
]
] |
Nowadays, games have become an entertainment alternative for various circles, and the game development business is also a profitable industry. In Indonesia the amount of game consumption is very high, especially for console games of the RPG (Role Playing Game) type. The task of this research is developing game software using Unity 3D to create an Android-based RPG game app. The story is packed in the RPG genre so the player can feel the main role of the story's imagination. The game to be built is titled The Cursed Prince. Users will get the sensation of a royal adventure. The game features a multiplayer system and 3D graphics; the main character is the Prince, the enemies are wizards and monsters, and the game has no time limit to complete. The game can also be saved, so it can be reopened. The Cursed Prince can be part of the development of the Indonesian gaming industry.
|
0904.0768
|
Andrew Thangaraj
|
Srimathy Srinivasan, Andrew Thangaraj
|
Codes on Planar Graphs
|
several improvements in presentation; more figures for illustration
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Codes defined on graphs and their properties have been subjects of intense
recent research. On the practical side, constructions for capacity-approaching
codes are graphical. On the theoretical side, codes on graphs provide several
intriguing problems in the intersection of coding theory and graph theory. In
this paper, we study codes defined by planar Tanner graphs. We derive an upper
bound on minimum distance $d$ of such codes as a function of the code rate $R$
for $R \ge 5/8$. The bound is given by $$d\le \lceil \frac{7-8R}{2(2R-1)}
\rceil + 3\le 7.$$ Among the interesting conclusions of this result are the
following: (1) planar graphs do not support asymptotically good codes, and (2)
finite-length, high-rate codes on graphs with high minimum distance will
necessarily be non-planar.
|
[
{
"created": "Sun, 5 Apr 2009 12:43:37 GMT",
"version": "v1"
},
{
"created": "Fri, 15 May 2009 06:42:29 GMT",
"version": "v2"
}
] |
2009-05-15
|
[
[
"Srinivasan",
"Srimathy",
""
],
[
"Thangaraj",
"Andrew",
""
]
] |
Codes defined on graphs and their properties have been subjects of intense recent research. On the practical side, constructions for capacity-approaching codes are graphical. On the theoretical side, codes on graphs provide several intriguing problems in the intersection of coding theory and graph theory. In this paper, we study codes defined by planar Tanner graphs. We derive an upper bound on minimum distance $d$ of such codes as a function of the code rate $R$ for $R \ge 5/8$. The bound is given by $$d\le \lceil \frac{7-8R}{2(2R-1)} \rceil + 3\le 7.$$ Among the interesting conclusions of this result are the following: (1) planar graphs do not support asymptotically good codes, and (2) finite-length, high-rate codes on graphs with high minimum distance will necessarily be non-planar.
|
2008.02986
|
Binh-Son Hua
|
Zhiyuan Zhang, Binh-Son Hua, Wei Chen, Yibin Tian, Sai-Kit Yeung
|
Global Context Aware Convolutions for 3D Point Cloud Understanding
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in deep learning for 3D point clouds have shown great
promise in scene understanding tasks thanks to the introduction of convolution
operators that consume 3D point clouds directly in a neural network. Point cloud
data, however, could have arbitrary rotations, especially those acquired from
3D scanning. Recent works show that it is possible to design point cloud
convolutions with the rotation invariance property, but such methods generally do
not perform as well as translation-invariant-only convolutions. We found that a
key reason is that compared to point coordinates, rotation-invariant features
consumed by point cloud convolution are not as distinctive. To address this
problem, we propose a novel convolution operator that enhances feature
distinction by integrating global context information from the input point
cloud to the convolution. To this end, a globally weighted local reference
frame is constructed in each point neighborhood in which the local point set is
decomposed into bins. Anchor points are generated in each bin to represent
global shape features. A convolution can then be performed to transform the
points and anchor features into final rotation-invariant features. We conduct
several experiments on point cloud classification, part segmentation, shape
retrieval, and normals estimation to evaluate our convolution, which achieves
state-of-the-art accuracy under challenging rotations.
|
[
{
"created": "Fri, 7 Aug 2020 04:33:27 GMT",
"version": "v1"
}
] |
2020-08-10
|
[
[
"Zhang",
"Zhiyuan",
""
],
[
"Hua",
"Binh-Son",
""
],
[
"Chen",
"Wei",
""
],
[
"Tian",
"Yibin",
""
],
[
"Yeung",
"Sai-Kit",
""
]
] |
Recent advances in deep learning for 3D point clouds have shown great promise in scene understanding tasks thanks to the introduction of convolution operators that consume 3D point clouds directly in a neural network. Point cloud data, however, could have arbitrary rotations, especially those acquired from 3D scanning. Recent works show that it is possible to design point cloud convolutions with the rotation invariance property, but such methods generally do not perform as well as translation-invariant-only convolutions. We found that a key reason is that compared to point coordinates, rotation-invariant features consumed by point cloud convolution are not as distinctive. To address this problem, we propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud to the convolution. To this end, a globally weighted local reference frame is constructed in each point neighborhood in which the local point set is decomposed into bins. Anchor points are generated in each bin to represent global shape features. A convolution can then be performed to transform the points and anchor features into final rotation-invariant features. We conduct several experiments on point cloud classification, part segmentation, shape retrieval, and normals estimation to evaluate our convolution, which achieves state-of-the-art accuracy under challenging rotations.
|
1505.01606
|
Harsh Thakkar
|
Harsh Thakkar, Ganesh Iyer, Prasenjit Majumder
|
A comparative study of approaches in user-centered health information
retrieval
|
6 pages, 2 figures, 1 table
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
In this paper, we survey various user-centered or context-based biomedical
health information retrieval systems. We present and discuss the performance of
systems submitted in CLEF eHealth 2014 Task 3 for this purpose. We classify and
focus on comparing the two most prevalent retrieval models in biomedical
information retrieval, namely: Language Model (LM) and Vector Space Model (VSM).
We also report on the effectiveness of using external medical resources and
ontologies like MeSH, Metamap, UMLS, etc. We observed that the LM-based
retrieval systems outperform VSM-based systems on various fronts. From the
results we conclude that the state-of-the-art system scores were 0.4146 for MAP,
0.7560 for P@10, and 0.7445 for NDCG@10, respectively. All of these scores were
reported by systems built on language modelling approaches.
|
[
{
"created": "Thu, 7 May 2015 07:32:33 GMT",
"version": "v1"
}
] |
2015-05-08
|
[
[
"Thakkar",
"Harsh",
""
],
[
"Iyer",
"Ganesh",
""
],
[
"Majumder",
"Prasenjit",
""
]
] |
In this paper, we survey various user-centered or context-based biomedical health information retrieval systems. We present and discuss the performance of systems submitted in CLEF eHealth 2014 Task 3 for this purpose. We classify and focus on comparing the two most prevalent retrieval models in biomedical information retrieval, namely: Language Model (LM) and Vector Space Model (VSM). We also report on the effectiveness of using external medical resources and ontologies like MeSH, Metamap, UMLS, etc. We observed that the LM-based retrieval systems outperform VSM-based systems on various fronts. From the results we conclude that the state-of-the-art system scores were 0.4146 for MAP, 0.7560 for P@10, and 0.7445 for NDCG@10, respectively. All of these scores were reported by systems built on language modelling approaches.
|
0802.1412
|
Mahesh Pal Dr.
|
Mahesh Pal
|
Extreme Learning Machine for land cover classification
|
6 pages, mapindia 2008 conference
| null |
10.1080/01431160902788636
| null |
cs.NE cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores the potential of an extreme learning machine based
supervised classification algorithm for land cover classification. In
comparison to a backpropagation neural network, which requires the setting of
several user-defined parameters and may produce local minima, an extreme learning
machine requires the setting of only one parameter and produces a unique solution.
An ETM+ multispectral data set (England) was used to judge the suitability of the
extreme learning machine for remote sensing classifications. A backpropagation
neural network was used to compare performance in terms of classification accuracy
and computational cost. Results suggest that the extreme learning machine
performs equally well to the backpropagation neural network in terms of
classification accuracy with this data set. The computational cost of the
extreme learning machine is very small in comparison to that of the
backpropagation neural network.
|
[
{
"created": "Mon, 11 Feb 2008 11:12:06 GMT",
"version": "v1"
}
] |
2019-07-02
|
[
[
"Pal",
"Mahesh",
""
]
] |
This paper explores the potential of the extreme learning machine based supervised classification algorithm for land cover classification. In comparison to a backpropagation neural network, which requires the setting of several user-defined parameters and may produce local minima, the extreme learning machine requires the setting of only one parameter and produces a unique solution. An ETM+ multispectral data set (England) was used to judge the suitability of the extreme learning machine for remote sensing classifications. A backpropagation neural network was used to compare its performance in terms of classification accuracy and computational cost. Results suggest that the extreme learning machine performs as well as the backpropagation neural network in terms of classification accuracy with this data set. The computational cost of the extreme learning machine is very small in comparison to the backpropagation neural network.
|
1510.04411
|
Harsh Taneja
|
Angela Xiao Wu, Harsh Taneja
|
Reimagining Internet Geographies: A User-Centric Ethnological Mapping of
the World Wide Web
|
Wu, Angela, Xiao & Taneja. H. (Forthcoming.) Reimagining Internet
Geographies: A User-Centric Ethnological Mapping of the World Wide Web.
Journal of Computer Mediated Communication. Both Authors Contributed Equally
to the Manuscript
| null |
10.1111/jcc4.12157
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new user-centric imagery of the WWW that foregrounds local usage
and its shaping forces, in contrast to existing imageries that prioritize
Internet infrastructure. We construct ethnological maps of WWW usage through a
network analysis of shared global traffic between the 1,000 most popular websites at
three time points and develop granular measures for exploring global
participation in online communication. Our results reveal the significant
growth and thickening of online regional cultures associated with the global
South. We draw attention to how local cultural identity, affirmative state
intervention and economic contexts shape regional cultures on the global WWW.
|
[
{
"created": "Thu, 15 Oct 2015 06:13:56 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Nov 2015 06:47:17 GMT",
"version": "v2"
}
] |
2016-03-11
|
[
[
"Wu",
"Angela Xiao",
""
],
[
"Taneja",
"Harsh",
""
]
] |
We propose a new user-centric imagery of the WWW that foregrounds local usage and its shaping forces, in contrast to existing imageries that prioritize Internet infrastructure. We construct ethnological maps of WWW usage through a network analysis of shared global traffic between the 1,000 most popular websites at three time points and develop granular measures for exploring global participation in online communication. Our results reveal the significant growth and thickening of online regional cultures associated with the global South. We draw attention to how local cultural identity, affirmative state intervention and economic contexts shape regional cultures on the global WWW.
|
2010.15908
|
Shehtab Zaman
|
Shehtab Zaman, Christopher Owen, Kenneth Chiu, Michael Lawler
|
Graph Neural Network for Metal Organic Framework Potential Energy
Approximation
|
Accepted for presentation at the Machine Learning for Molecules
Workshop at NeurIPS 2020
| null | null | null |
cs.LG cond-mat.mtrl-sci
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Metal-organic frameworks (MOFs) are nanoporous compounds composed of metal
ions and organic linkers. MOFs play an important role in industrial
applications such as gas separation, gas purification, and electrolytic
catalysis. Important MOF properties such as potential energy are currently
computed via techniques such as density functional theory (DFT). Although DFT
provides accurate results, it is computationally costly. We propose a machine
learning approach for estimating the potential energy of candidate MOFs,
decomposing it into separate pair-wise atomic interactions using a graph neural
network. Such a technique will allow high-throughput screening of candidate
MOFs. We also generate a database of 50,000 spatial configurations and
high-quality potential energy values using DFT.
|
[
{
"created": "Thu, 29 Oct 2020 19:47:44 GMT",
"version": "v1"
}
] |
2020-11-02
|
[
[
"Zaman",
"Shehtab",
""
],
[
"Owen",
"Christopher",
""
],
[
"Chiu",
"Kenneth",
""
],
[
"Lawler",
"Michael",
""
]
] |
Metal-organic frameworks (MOFs) are nanoporous compounds composed of metal ions and organic linkers. MOFs play an important role in industrial applications such as gas separation, gas purification, and electrolytic catalysis. Important MOF properties such as potential energy are currently computed via techniques such as density functional theory (DFT). Although DFT provides accurate results, it is computationally costly. We propose a machine learning approach for estimating the potential energy of candidate MOFs, decomposing it into separate pair-wise atomic interactions using a graph neural network. Such a technique will allow high-throughput screening of candidate MOFs. We also generate a database of 50,000 spatial configurations and high-quality potential energy values using DFT.
|
2402.09052
|
Yutaro Yamada
|
Yutaro Yamada, Khyathi Chandu, Yuchen Lin, Jack Hessel, Ilker
Yildirim, Yejin Choi
|
L3GO: Language Agents with Chain-of-3D-Thoughts for Generating
Unconventional Objects
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion-based image generation models such as DALL-E 3 and Stable
Diffusion-XL demonstrate remarkable capabilities in generating images with
realistic and unique compositions. Yet, these models are not robust in
precisely reasoning about physical and spatial configurations of objects,
especially when instructed with unconventional, thereby out-of-distribution
descriptions, such as "a chair with five legs". In this paper, we propose a
language agent with chain-of-3D-thoughts (L3GO), an inference-time approach
that can reason about part-based 3D mesh generation of unconventional objects
that current data-driven diffusion models struggle with. More concretely, we
use large language models as agents to compose a desired object via
trial-and-error within the 3D simulation environment. To facilitate our
investigation, we develop a new benchmark, Unconventionally Feasible Objects
(UFO), as well as SimpleBlenv, a wrapper environment built on top of Blender
where language agents can build and compose atomic building blocks via API
calls. Human and automatic GPT-4V evaluations show that our approach surpasses
the standard GPT-4 and other language agents (e.g., ReAct and Reflexion) for 3D
mesh generation on ShapeNet. Moreover, when tested on our UFO benchmark, our
approach outperforms other state-of-the-art text-to-2D image and text-to-3D
models based on human evaluation.
|
[
{
"created": "Wed, 14 Feb 2024 09:51:05 GMT",
"version": "v1"
}
] |
2024-02-15
|
[
[
"Yamada",
"Yutaro",
""
],
[
"Chandu",
"Khyathi",
""
],
[
"Lin",
"Yuchen",
""
],
[
"Hessel",
"Jack",
""
],
[
"Yildirim",
"Ilker",
""
],
[
"Choi",
"Yejin",
""
]
] |
Diffusion-based image generation models such as DALL-E 3 and Stable Diffusion-XL demonstrate remarkable capabilities in generating images with realistic and unique compositions. Yet, these models are not robust in precisely reasoning about physical and spatial configurations of objects, especially when instructed with unconventional, thereby out-of-distribution descriptions, such as "a chair with five legs". In this paper, we propose a language agent with chain-of-3D-thoughts (L3GO), an inference-time approach that can reason about part-based 3D mesh generation of unconventional objects that current data-driven diffusion models struggle with. More concretely, we use large language models as agents to compose a desired object via trial-and-error within the 3D simulation environment. To facilitate our investigation, we develop a new benchmark, Unconventionally Feasible Objects (UFO), as well as SimpleBlenv, a wrapper environment built on top of Blender where language agents can build and compose atomic building blocks via API calls. Human and automatic GPT-4V evaluations show that our approach surpasses the standard GPT-4 and other language agents (e.g., ReAct and Reflexion) for 3D mesh generation on ShapeNet. Moreover, when tested on our UFO benchmark, our approach outperforms other state-of-the-art text-to-2D image and text-to-3D models based on human evaluation.
|
1610.08250
|
Kamal Nayan Reddy Challa
|
Kamal Nayan Reddy Challa, Venkata Sasank Pagolu, Ganapati Panda,
Babita Majhi
|
An Improved Approach for Prediction of Parkinson's Disease using Machine
Learning Techniques
|
Conference Paper
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parkinson's disease (PD) is one of the major public health problems in the
world. It is a well-known fact that around one million people suffer from
Parkinson's disease in the United States whereas the number of people suffering
from Parkinson's disease worldwide is around 5 million. Thus, it is important
to predict Parkinson's disease in its early stages so that an early plan for the
necessary treatment can be made. People are mostly familiar with the motor
symptoms of Parkinson's disease, however, an increasing amount of research is
being done to predict the Parkinson's disease from non-motor symptoms that
precede the motor ones. If an early and reliable prediction is possible then a
patient can get a proper treatment at the right time. Nonmotor symptoms
considered are Rapid Eye Movement (REM) sleep Behaviour Disorder (RBD) and
olfactory loss. Developing machine learning models that can help us in
predicting the disease can play a vital role in early prediction. In this
paper, we extend a work which used the non-motor features such as RBD and
olfactory loss. Along with this, the extended work also uses important
biomarkers. In this paper, we try to model this classifier using different
machine learning models that have not been used before. We developed automated
diagnostic models using Multilayer Perceptron, BayesNet, Random Forest and
Boosted Logistic Regression. It has been observed that Boosted Logistic
Regression provides the best performance with an impressive accuracy of 97.159
% and the area under the ROC curve was 98.9%. Thus, it is concluded that these
models can be used for early prediction of Parkinson's disease.
|
[
{
"created": "Wed, 26 Oct 2016 09:34:39 GMT",
"version": "v1"
}
] |
2016-10-27
|
[
[
"Challa",
"Kamal Nayan Reddy",
""
],
[
"Pagolu",
"Venkata Sasank",
""
],
[
"Panda",
"Ganapati",
""
],
[
"Majhi",
"Babita",
""
]
] |
Parkinson's disease (PD) is one of the major public health problems in the world. It is a well-known fact that around one million people suffer from Parkinson's disease in the United States whereas the number of people suffering from Parkinson's disease worldwide is around 5 million. Thus, it is important to predict Parkinson's disease in its early stages so that an early plan for the necessary treatment can be made. People are mostly familiar with the motor symptoms of Parkinson's disease, however, an increasing amount of research is being done to predict the Parkinson's disease from non-motor symptoms that precede the motor ones. If an early and reliable prediction is possible then a patient can get a proper treatment at the right time. Nonmotor symptoms considered are Rapid Eye Movement (REM) sleep Behaviour Disorder (RBD) and olfactory loss. Developing machine learning models that can help us in predicting the disease can play a vital role in early prediction. In this paper, we extend a work which used the non-motor features such as RBD and olfactory loss. Along with this, the extended work also uses important biomarkers. In this paper, we try to model this classifier using different machine learning models that have not been used before. We developed automated diagnostic models using Multilayer Perceptron, BayesNet, Random Forest and Boosted Logistic Regression. It has been observed that Boosted Logistic Regression provides the best performance with an impressive accuracy of 97.159 % and the area under the ROC curve was 98.9%. Thus, it is concluded that these models can be used for early prediction of Parkinson's disease.
|
2009.14572
|
Sandro Lera
|
Delilah Donick and Sandro Claudio Lera
|
Uncovering Feature Interdependencies in High-Noise Environments with
Stepwise Lookahead Decision Forests
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventionally, random forests are built from "greedy" decision trees which
each consider only one split at a time during their construction. The
sub-optimality of greedy implementation has been well-known, yet mainstream
adoption of more sophisticated tree building algorithms has been lacking. We
examine under what circumstances an implementation of less greedy decision
trees actually yields outperformance. To this end, a "stepwise lookahead"
variation of the random forest algorithm is presented for its ability to better
uncover binary feature interdependencies. In contrast to the greedy approach,
the decision trees included in this random forest algorithm each
simultaneously consider three split nodes in tiers of depth two. It is
demonstrated on synthetic data and financial price time series that the
lookahead version significantly outperforms the greedy one when (a) certain
non-linear relationships between feature pairs are present and (b) the
signal-to-noise ratio is particularly low. A long-short trading strategy for
copper futures is then backtested by training both greedy and stepwise
lookahead random forests to predict the signs of daily price returns. The
resulting superior performance of the lookahead algorithm is at least partially
explained by the presence of "XOR-like" relationships between long-term and
short-term technical indicators. More generally, across all examined datasets,
when no such relationships between features are present, performance across
random forests is similar. Given its enhanced ability to understand the
feature-interdependencies present in complex systems, this lookahead variation
is a useful extension to the toolkit of data scientists, in particular for
financial machine learning, where conditions (a) and (b) are typically met.
|
[
{
"created": "Wed, 30 Sep 2020 11:31:10 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Oct 2020 01:54:54 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Oct 2020 14:11:33 GMT",
"version": "v3"
},
{
"created": "Mon, 1 Feb 2021 04:21:00 GMT",
"version": "v4"
},
{
"created": "Wed, 31 Mar 2021 14:24:26 GMT",
"version": "v5"
}
] |
2021-04-01
|
[
[
"Donick",
"Delilah",
""
],
[
"Lera",
"Sandro Claudio",
""
]
] |
Conventionally, random forests are built from "greedy" decision trees which each consider only one split at a time during their construction. The sub-optimality of greedy implementation has been well-known, yet mainstream adoption of more sophisticated tree building algorithms has been lacking. We examine under what circumstances an implementation of less greedy decision trees actually yields outperformance. To this end, a "stepwise lookahead" variation of the random forest algorithm is presented for its ability to better uncover binary feature interdependencies. In contrast to the greedy approach, the decision trees included in this random forest algorithm each simultaneously consider three split nodes in tiers of depth two. It is demonstrated on synthetic data and financial price time series that the lookahead version significantly outperforms the greedy one when (a) certain non-linear relationships between feature pairs are present and (b) the signal-to-noise ratio is particularly low. A long-short trading strategy for copper futures is then backtested by training both greedy and stepwise lookahead random forests to predict the signs of daily price returns. The resulting superior performance of the lookahead algorithm is at least partially explained by the presence of "XOR-like" relationships between long-term and short-term technical indicators. More generally, across all examined datasets, when no such relationships between features are present, performance across random forests is similar. Given its enhanced ability to understand the feature-interdependencies present in complex systems, this lookahead variation is a useful extension to the toolkit of data scientists, in particular for financial machine learning, where conditions (a) and (b) are typically met.
|
2403.17103
|
Remy Sabathier
|
Remy Sabathier, Niloy J. Mitra, David Novotny
|
Animal Avatars: Reconstructing Animatable 3D Animals from Casual Videos
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a method to build animatable dog avatars from monocular videos.
This is challenging as animals display a range of (unpredictable) non-rigid
movements and have a variety of appearance details (e.g., fur, spots, tails).
We develop an approach that links the video frames via a 4D solution that
jointly solves for animal's pose variation, and its appearance (in a canonical
pose). To this end, we significantly improve the quality of template-based
shape fitting by endowing the SMAL parametric model with Continuous Surface
Embeddings, which brings image-to-mesh reprojection constraints that are denser,
and thus stronger, than the previously used sparse semantic keypoint
correspondences. To model appearance, we propose an implicit duplex-mesh
texture that is defined in the canonical pose, but can be deformed using SMAL
pose coefficients and later rendered to enforce a photometric compatibility
with the input video frames. On the challenging CoP3D and APTv2 datasets, we
demonstrate superior results (both in terms of pose estimates and predicted
appearance) to existing template-free (RAC) and template-based approaches
(BARC, BITE).
|
[
{
"created": "Mon, 25 Mar 2024 18:41:43 GMT",
"version": "v1"
}
] |
2024-03-27
|
[
[
"Sabathier",
"Remy",
""
],
[
"Mitra",
"Niloy J.",
""
],
[
"Novotny",
"David",
""
]
] |
We present a method to build animatable dog avatars from monocular videos. This is challenging as animals display a range of (unpredictable) non-rigid movements and have a variety of appearance details (e.g., fur, spots, tails). We develop an approach that links the video frames via a 4D solution that jointly solves for animal's pose variation, and its appearance (in a canonical pose). To this end, we significantly improve the quality of template-based shape fitting by endowing the SMAL parametric model with Continuous Surface Embeddings, which brings image-to-mesh reprojection constraints that are denser, and thus stronger, than the previously used sparse semantic keypoint correspondences. To model appearance, we propose an implicit duplex-mesh texture that is defined in the canonical pose, but can be deformed using SMAL pose coefficients and later rendered to enforce a photometric compatibility with the input video frames. On the challenging CoP3D and APTv2 datasets, we demonstrate superior results (both in terms of pose estimates and predicted appearance) to existing template-free (RAC) and template-based approaches (BARC, BITE).
|
1907.04028
|
Bin Yang
|
Sean Bin Yang, Bin Yang
|
PathRank: A Multi-Task Learning Framework to Rank Paths in Spatial
Networks
| null | null | null | null |
cs.LG cs.DB stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern navigation services often provide multiple paths connecting the same
source and destination for users to select. Hence, ranking such paths becomes
increasingly important, which directly affects the service quality. We present
PathRank, a data-driven framework for ranking paths based on historical
trajectories using multi-task learning. If a trajectory used path P from source
s to destination d, PathRank considers this as evidence that P is preferred
over all other paths from s to d. Thus, a path that is similar to P should have
a larger ranking score than a path that is dissimilar to P. Based on this
intuition, PathRank models path ranking as a regression problem, where each
path is associated with a ranking score.
To enable PathRank, we first propose an effective method to generate a
compact set of training data: for each trajectory, we generate a small set of
diversified paths. Next, we propose a multi-task learning framework to solve
the regression problem. In particular, a spatial network embedding is proposed
to embed each vertex to a feature vector by considering both road network
topology and spatial properties, such as distances and travel times. Since a
path is represented by a sequence of vertices, which is now a sequence of
feature vectors after embedding, recurrent neural network is applied to model
the sequence. The objective function is designed to consider errors on both
ranking scores and spatial properties, making the framework a multi-task
learning framework. Empirical studies on a substantial trajectory data set
offer insight into the designed properties of the proposed framework and
indicate that it is effective and practical.
|
[
{
"created": "Tue, 9 Jul 2019 07:45:55 GMT",
"version": "v1"
}
] |
2019-07-10
|
[
[
"Yang",
"Sean Bin",
""
],
[
"Yang",
"Bin",
""
]
] |
Modern navigation services often provide multiple paths connecting the same source and destination for users to select. Hence, ranking such paths becomes increasingly important, which directly affects the service quality. We present PathRank, a data-driven framework for ranking paths based on historical trajectories using multi-task learning. If a trajectory used path P from source s to destination d, PathRank considers this as evidence that P is preferred over all other paths from s to d. Thus, a path that is similar to P should have a larger ranking score than a path that is dissimilar to P. Based on this intuition, PathRank models path ranking as a regression problem, where each path is associated with a ranking score. To enable PathRank, we first propose an effective method to generate a compact set of training data: for each trajectory, we generate a small set of diversified paths. Next, we propose a multi-task learning framework to solve the regression problem. In particular, a spatial network embedding is proposed to embed each vertex to a feature vector by considering both road network topology and spatial properties, such as distances and travel times. Since a path is represented by a sequence of vertices, which is now a sequence of feature vectors after embedding, recurrent neural network is applied to model the sequence. The objective function is designed to consider errors on both ranking scores and spatial properties, making the framework a multi-task learning framework. Empirical studies on a substantial trajectory data set offer insight into the designed properties of the proposed framework and indicate that it is effective and practical.
|
1804.10367
|
Araz Taeihagh
|
Hazel Si Min Lim and Araz Taeihagh
|
Autonomous Vehicles for Smart and Sustainable Cities: An In-Depth
Exploration of Privacy and Cybersecurity Implications
| null |
Energies 11, no. 5: 1062 (2018)
|
10.3390/en11051062
| null |
cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Amidst rapid urban development, sustainable transportation solutions are
required to meet the increasing demands for mobility whilst mitigating the
potentially negative social, economic, and environmental impacts. This study
analyses autonomous vehicles (AVs) as a potential transportation solution for
smart and sustainable development. We identified privacy and cybersecurity
risks of AVs as crucial to the development of smart and sustainable cities and
examined the steps taken by governments around the world to address these
risks. We highlight the literature that supports why AVs are essential for
smart and sustainable development. We then identify the aspects of privacy and
cybersecurity in AVs that are important for smart and sustainable development.
Lastly, we review the efforts taken by federal governments in the US, the UK,
China, Australia, Japan, Singapore, South Korea, Germany, France, and the EU,
and by US state governments to address AV-related privacy and cybersecurity
risks in-depth. Overall, the actions taken by governments to address privacy
risks are mainly in the form of regulations or voluntary guidelines. To address
cybersecurity risks, governments have mostly resorted to regulations that are
not specific to AVs and are conducting research and fostering research
collaborations with the private sector.
|
[
{
"created": "Fri, 27 Apr 2018 07:29:34 GMT",
"version": "v1"
}
] |
2018-04-30
|
[
[
"Lim",
"Hazel Si Min",
""
],
[
"Taeihagh",
"Araz",
""
]
] |
Amidst rapid urban development, sustainable transportation solutions are required to meet the increasing demands for mobility whilst mitigating the potentially negative social, economic, and environmental impacts. This study analyses autonomous vehicles (AVs) as a potential transportation solution for smart and sustainable development. We identified privacy and cybersecurity risks of AVs as crucial to the development of smart and sustainable cities and examined the steps taken by governments around the world to address these risks. We highlight the literature that supports why AVs are essential for smart and sustainable development. We then identify the aspects of privacy and cybersecurity in AVs that are important for smart and sustainable development. Lastly, we review the efforts taken by federal governments in the US, the UK, China, Australia, Japan, Singapore, South Korea, Germany, France, and the EU, and by US state governments to address AV-related privacy and cybersecurity risks in-depth. Overall, the actions taken by governments to address privacy risks are mainly in the form of regulations or voluntary guidelines. To address cybersecurity risks, governments have mostly resorted to regulations that are not specific to AVs and are conducting research and fostering research collaborations with the private sector.
|
1807.11329
|
Yu Chen
|
Seyed Yahya Nikouei, Yu Chen, Alexander Aved, Erik Blasch
|
EIQIS: Toward an Event-Oriented Indexable and Queryable Intelligent
Surveillance System
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Edge computing provides the ability to link distributed users for multimedia
content, while retaining the power of significant data storage and access at a
centralized computer. Two requirements of significance include: what
information should be processed at the edge and how the content should be
stored. Answers to these questions require a combination of query-based search,
access, and response as well as index-based processing, storage, and
distribution. A measure of intelligence is not what is known, but what is
recalled; hence, future edge intelligence must provide recalled information for
dynamic response. In this paper, a novel event-oriented indexable and queryable
intelligent surveillance (EIQIS) system is introduced, leveraging on-site edge
devices to collect the information sensed in the format of frames and extract
useful features to enhance situation awareness. The design principles are
discussed and a preliminary proof-of-concept prototype is built that validates
the feasibility of the proposed idea.
|
[
{
"created": "Mon, 30 Jul 2018 13:04:57 GMT",
"version": "v1"
}
] |
2018-07-31
|
[
[
"Nikouei",
"Seyed Yahya",
""
],
[
"Chen",
"Yu",
""
],
[
"Aved",
"Alexander",
""
],
[
"Blasch",
"Erik",
""
]
] |
Edge computing provides the ability to link distributed users for multimedia content, while retaining the power of significant data storage and access at a centralized computer. Two requirements of significance include: what information should be processed at the edge and how the content should be stored. Answers to these questions require a combination of query-based search, access, and response as well as index-based processing, storage, and distribution. A measure of intelligence is not what is known, but what is recalled; hence, future edge intelligence must provide recalled information for dynamic response. In this paper, a novel event-oriented indexable and queryable intelligent surveillance (EIQIS) system is introduced, leveraging on-site edge devices to collect the information sensed in the format of frames and extract useful features to enhance situation awareness. The design principles are discussed and a preliminary proof-of-concept prototype is built that validates the feasibility of the proposed idea.
|
1603.00845
|
Xavier Gir\'o-i-Nieto
|
Junting Pan, Kevin McGuinness, Elisa Sayrol, Noel O'Connor and Xavier
Giro-i-Nieto
|
Shallow and Deep Convolutional Networks for Saliency Prediction
|
Preprint of the paper accepted at 2016 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR). Source code and models available at
https://github.com/imatge-upc/saliency-2016-cvpr. Junting Pan and Kevin
McGuinness contributed equally to this work
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The prediction of salient areas in images has been traditionally addressed
with hand-crafted features based on neuroscience principles. This paper,
however, addresses the problem with a completely data-driven approach by
training a convolutional neural network (convnet). The learning process is
formulated as a minimization of a loss function that measures the Euclidean
distance of the predicted saliency map with the provided ground truth. The
recent publication of large datasets of saliency prediction has provided enough
data to train end-to-end architectures that are both fast and accurate. Two
designs are proposed: a shallow convnet trained from scratch, and another,
deeper solution whose first three layers are adapted from another network
trained for classification. To the authors' knowledge, these are the first
end-to-end CNNs trained and tested for the purpose of saliency prediction.
|
[
{
"created": "Wed, 2 Mar 2016 19:54:02 GMT",
"version": "v1"
}
] |
2016-03-03
|
[
[
"Pan",
"Junting",
""
],
[
"McGuinness",
"Kevin",
""
],
[
"Sayrol",
"Elisa",
""
],
[
"O'Connor",
"Noel",
""
],
[
"Giro-i-Nieto",
"Xavier",
""
]
] |
The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency prediction has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and another, deeper solution whose first three layers are adapted from another network trained for classification. To the authors' knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction.
|
2007.10175
|
Jordan J. Bird
|
Jordan J. Bird, Diego R. Faria, Cristiano Premebida, Anik\'o Ek\'art,
George Vogiatzis
|
Look and Listen: A Multi-modality Late Fusion Approach to Scene
Classification for Autonomous Machines
|
6 pages, 10 figures, 3 tables
| null | null | null |
cs.CV cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The novelty of this study consists in a multi-modality approach to scene
classification, where image and audio complement each other in a process of
deep late fusion. The approach is demonstrated on a difficult classification
problem, consisting of two synchronised and balanced datasets of 16,000 data
objects, encompassing 4.4 hours of video of 8 environments with varying degrees
of similarity. We first extract video frames and accompanying audio at one
second intervals. The image and the audio datasets are first classified
independently, using a fine-tuned VGG16 and an evolutionary optimised deep
neural network, with accuracies of 89.27% and 93.72%, respectively. This is
followed by late fusion of the two neural networks to enable a higher order
function, leading to accuracy of 96.81% in this multi-modality classifier with
synchronised video frames and audio clips. The tertiary neural network
implemented for late fusion outperforms classical state-of-the-art classifiers
by around 3% when the two primary networks are considered as feature
generators. We show that situations where a single modality may be confused by
anomalous data points are now corrected through an emerging higher order
integration. Prominent examples include a water feature in a city misclassified
as a river by the audio classifier alone and a densely crowded street
misclassified as a forest by the image classifier alone. Both are examples
which are correctly classified by our multi-modality approach.
|
[
{
"created": "Sat, 11 Jul 2020 16:47:05 GMT",
"version": "v1"
}
] |
2020-07-21
|
[
[
"Bird",
"Jordan J.",
""
],
[
"Faria",
"Diego R.",
""
],
[
"Premebida",
"Cristiano",
""
],
[
"Ekárt",
"Anikó",
""
],
[
"Vogiatzis",
"George",
""
]
] |
The novelty of this study consists in a multi-modality approach to scene classification, where image and audio complement each other in a process of deep late fusion. The approach is demonstrated on a difficult classification problem, consisting of two synchronised and balanced datasets of 16,000 data objects, encompassing 4.4 hours of video of 8 environments with varying degrees of similarity. We first extract video frames and accompanying audio at one second intervals. The image and the audio datasets are first classified independently, using a fine-tuned VGG16 and an evolutionary optimised deep neural network, with accuracies of 89.27% and 93.72%, respectively. This is followed by late fusion of the two neural networks to enable a higher order function, leading to accuracy of 96.81% in this multi-modality classifier with synchronised video frames and audio clips. The tertiary neural network implemented for late fusion outperforms classical state-of-the-art classifiers by around 3% when the two primary networks are considered as feature generators. We show that situations where a single modality may be confused by anomalous data points are now corrected through an emerging higher order integration. Prominent examples include a water feature in a city misclassified as a river by the audio classifier alone and a densely crowded street misclassified as a forest by the image classifier alone. Both are examples which are correctly classified by our multi-modality approach.
|
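The late-fusion idea in this abstract, combining the class-probability outputs of the two single-modality networks after they have each made a prediction, can be sketched with the simplest possible fusion rule. The paper actually trains a tertiary neural network on the two primary networks' outputs; the fixed weighted average below, and the class ordering in the example, are illustrative stand-ins:

```python
import numpy as np

def late_fuse(image_probs, audio_probs, w_image=0.5):
    """Weighted average of the class probabilities produced by the two
    single-modality classifiers -- the simplest late-fusion rule. The
    paper instead learns the fusion with a tertiary network; this fixed
    rule is only an illustrative stand-in."""
    fused = w_image * np.asarray(image_probs, float) \
        + (1.0 - w_image) * np.asarray(audio_probs, float)
    return int(np.argmax(fused)), fused
```

With hypothetical classes `[river, city]`, an audio net that leans "river" (0.6) for a water feature in a city is overruled by an image net confident in "city" (0.9): the fused vector is `[0.35, 0.65]` and the correct class wins, mirroring the correction behaviour the abstract reports.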
1412.5616
|
Salvatore Talarico
|
Don Torrieri, Salvatore Talarico, and Matthew C. Valenti
|
Performance Analysis of Geographic Routing Protocols in Ad Hoc Networks
|
6 pages, 7 figures, IEEE Military Commun. Conf. (MILCOM), 2014. arXiv
admin note: substantial text overlap with arXiv:1309.3582
| null |
10.1109/MILCOM.2014.183
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Geographic routing protocols greatly reduce the requirements of topology
storage and provide flexibility in the accommodation of the dynamic behavior of
ad hoc networks. This paper presents performance evaluations and comparisons of
two geographic routing protocols and the popular AODV protocol. The trade-offs
among the average path reliabilities, average conditional delays, average
conditional number of hops, and area spectral efficiencies and the effects of
various parameters are illustrated for finite ad hoc networks with randomly
placed mobiles. This paper uses a dual method of closed-form analysis and
simple simulation that is applicable to most routing protocols and provides a
much more realistic performance evaluation than has previously been possible.
Some features included in the new analysis are shadowing, exclusion and guard
zones, and distance-dependent fading.
|
[
{
"created": "Mon, 8 Dec 2014 07:53:40 GMT",
"version": "v1"
}
] |
2014-12-19
|
[
[
"Torrieri",
"Don",
""
],
[
"Talarico",
"Salvatore",
""
],
[
"Valenti",
"Matthew C.",
""
]
] |
Geographic routing protocols greatly reduce the requirements of topology storage and provide flexibility in the accommodation of the dynamic behavior of ad hoc networks. This paper presents performance evaluations and comparisons of two geographic routing protocols and the popular AODV protocol. The trade-offs among the average path reliabilities, average conditional delays, average conditional number of hops, and area spectral efficiencies and the effects of various parameters are illustrated for finite ad hoc networks with randomly placed mobiles. This paper uses a dual method of closed-form analysis and simple simulation that is applicable to most routing protocols and provides a much more realistic performance evaluation than has previously been possible. Some features included in the new analysis are shadowing, exclusion and guard zones, and distance-dependent fading.
|
2303.12524
|
Luigi Capogrosso
|
Luigi Capogrosso, Federico Cunico, Michele Lora, Marco Cristani,
Franco Fummi, Davide Quaglia
|
Split-Et-Impera: A Framework for the Design of Distributed Deep Learning
Applications
|
26th International Symposium on Design and Diagnostics of Electronic
Circuits and Systems (DDECS)
| null | null | null |
cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Many recent pattern recognition applications rely on complex distributed
architectures in which sensing and computational nodes interact
through a communication network. Deep neural networks (DNNs) play an important
role in this scenario, furnishing powerful decision mechanisms, at the price of
a high computational effort. Consequently, powerful state-of-the-art DNNs are
frequently split over various computational nodes, e.g., a first part stays on
an embedded device and the rest on a server. Deciding where to split a DNN is a
challenge in itself, making the design of deep learning applications even more
complicated. Therefore, we propose Split-Et-Impera, a novel and practical
framework that i) determines the set of the best-split points of a neural
network based on deep network interpretability principles without performing a
tedious try-and-test approach, ii) performs a communication-aware simulation
for the rapid evaluation of different neural network rearrangements, and iii)
suggests the best match between the quality of service requirements of the
application and the performance in terms of accuracy and latency.
|
[
{
"created": "Wed, 22 Mar 2023 13:00:00 GMT",
"version": "v1"
}
] |
2023-03-23
|
[
[
"Capogrosso",
"Luigi",
""
],
[
"Cunico",
"Federico",
""
],
[
"Lora",
"Michele",
""
],
[
"Cristani",
"Marco",
""
],
[
"Fummi",
"Franco",
""
],
[
"Quaglia",
"Davide",
""
]
] |
Many recent pattern recognition applications rely on complex distributed architectures in which sensing and computational nodes interact through a communication network. Deep neural networks (DNNs) play an important role in this scenario, furnishing powerful decision mechanisms, at the price of a high computational effort. Consequently, powerful state-of-the-art DNNs are frequently split over various computational nodes, e.g., a first part stays on an embedded device and the rest on a server. Deciding where to split a DNN is a challenge in itself, making the design of deep learning applications even more complicated. Therefore, we propose Split-Et-Impera, a novel and practical framework that i) determines the set of the best-split points of a neural network based on deep network interpretability principles without performing a tedious try-and-test approach, ii) performs a communication-aware simulation for the rapid evaluation of different neural network rearrangements, and iii) suggests the best match between the quality of service requirements of the application and the performance in terms of accuracy and latency.
|
2008.03640
|
Feng Xia
|
Jie Hou, Hanxiao Pan, Teng Guo, Ivan Lee, Xiangjie Kong, Feng Xia
|
Prediction Methods and Applications in the Science of Science: A Survey
|
17 pages, 6 figures
|
Computer Science Review, Volume 34, November 2019, 100197
|
10.1016/j.cosrev.2019.100197
| null |
cs.SI cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Science of science has become a popular topic that attracts great attention
from the research community. The development of data analytics technologies and
the readily available scholarly data enable the exploration of data-driven
prediction, which plays a pivotal role in finding the trend of scientific
impact. In this paper, we analyse methods and applications in data-driven
prediction in the science of science, and discuss their significance. First, we
introduce the background and review the current state of the science of
science. Second, we review data-driven prediction based on paper citation
count, and investigate research issues in this area. Then, we discuss methods
to predict scholar impact, and we analyse different approaches to promote
scholarly collaboration in the collaboration network. This paper also discusses
open issues and existing challenges, and suggests potential research
directions.
|
[
{
"created": "Sun, 9 Aug 2020 03:42:43 GMT",
"version": "v1"
}
] |
2020-08-11
|
[
[
"Hou",
"Jie",
""
],
[
"Pan",
"Hanxiao",
""
],
[
"Guo",
"Teng",
""
],
[
"Lee",
"Ivan",
""
],
[
"Kong",
"Xiangjie",
""
],
[
"Xia",
"Feng",
""
]
] |
Science of science has become a popular topic that attracts great attention from the research community. The development of data analytics technologies and the readily available scholarly data enable the exploration of data-driven prediction, which plays a pivotal role in finding the trend of scientific impact. In this paper, we analyse methods and applications in data-driven prediction in the science of science, and discuss their significance. First, we introduce the background and review the current state of the science of science. Second, we review data-driven prediction based on paper citation count, and investigate research issues in this area. Then, we discuss methods to predict scholar impact, and we analyse different approaches to promote scholarly collaboration in the collaboration network. This paper also discusses open issues and existing challenges, and suggests potential research directions.
|
1804.00397
|
Jaqueline De Oliveira J F Oliveira
|
Josemar Alves Caetano, Jaqueline Faria de Oliveira, Helder Seixas
Lima, Humberto T. Marques-Neto, Gabriel Magno, Wagner Meira Jr, Virg\'ilio A.
F. Almeida
|
Analyzing and characterizing political discussions in WhatsApp public
groups
|
10 pages, 12 figures
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a thorough characterization of what we believe to be the first
significant analysis of the behavior of groups in WhatsApp in the scientific
literature. Our characterization of over 270,000 messages and about 7,000 users
spanning a 28-day period is done at three different layers. The message layer
focuses on individual messages, each of which is the result of specific posts
performed by a user. The user layer characterizes the user actions while
interacting with a group. The group layer characterizes the aggregate message
patterns of all users that participate in a group. We analyze 81 public groups
in WhatsApp and classify them into two categories, political and non-political
groups according to keywords associated with each group. Our contributions are
two-fold. First, we introduce a framework and a number of metrics to
characterize the behavior of communication groups in mobile messaging systems
such as WhatsApp. Second, our analysis underscores a Zipf-like profile for user
messages in political groups. Also, our analysis reveals that WhatsApp messages
are multimedia, with a combination of different forms of content. Multimedia
content (i.e., audio, image, and video) and emojis are present in 20% and 11.2%
of all messages, respectively. Political groups use more text messages than
non-political groups. Finally, we characterize novel features that represent the
behavior of a public group, with multiple conversational turns between key
members, with the participation of other members of the group.
|
[
{
"created": "Mon, 2 Apr 2018 05:10:36 GMT",
"version": "v1"
}
] |
2018-04-03
|
[
[
"Caetano",
"Josemar Alves",
""
],
[
"de Oliveira",
"Jaqueline Faria",
""
],
[
"Lima",
"Helder Seixas",
""
],
[
"Marques-Neto",
"Humberto T.",
""
],
[
"Magno",
"Gabriel",
""
],
[
"Meira",
"Wagner",
"Jr"
],
[
"Almeida",
"Virgílio A. F.",
""
]
] |
We present a thorough characterization of what we believe to be the first significant analysis of the behavior of groups in WhatsApp in the scientific literature. Our characterization of over 270,000 messages and about 7,000 users spanning a 28-day period is done at three different layers. The message layer focuses on individual messages, each of which is the result of specific posts performed by a user. The user layer characterizes the user actions while interacting with a group. The group layer characterizes the aggregate message patterns of all users that participate in a group. We analyze 81 public groups in WhatsApp and classify them into two categories, political and non-political groups according to keywords associated with each group. Our contributions are two-fold. First, we introduce a framework and a number of metrics to characterize the behavior of communication groups in mobile messaging systems such as WhatsApp. Second, our analysis underscores a Zipf-like profile for user messages in political groups. Also, our analysis reveals that WhatsApp messages are multimedia, with a combination of different forms of content. Multimedia content (i.e., audio, image, and video) and emojis are present in 20% and 11.2% of all messages, respectively. Political groups use more text messages than non-political groups. Finally, we characterize novel features that represent the behavior of a public group, with multiple conversational turns between key members, with the participation of other members of the group.
|
1103.3319
|
EPTCS
|
Andrea Asperti (University of Bologna), Enrico Tassi (Microsoft
Research - INRIA Joint Centre)
|
Superposition as a logical glue
|
In Proceedings TYPES 2009, arXiv:1103.3111
|
EPTCS 53, 2011, pp. 1-15
|
10.4204/EPTCS.53.1
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The typical mathematical language systematically exploits notational and
logical abuses whose resolution requires not just the knowledge of
domain-specific notation and conventions, but also non-trivial skills in the
given mathematical discipline. A large part of this background knowledge is
expressed in the form of equalities and isomorphisms, allowing mathematicians to freely move
between different incarnations of the same entity without even mentioning the
transformation. Providing ITP-systems with similar capabilities seems to be a
major way to improve their intelligence, and to ease the communication between
the user and the machine. The present paper discusses our experience of
integration of a superposition calculus within the Matita interactive prover,
providing in particular a very flexible, "smart" application tactic, and a
simple, innovative approach to automation.
|
[
{
"created": "Thu, 17 Mar 2011 00:19:29 GMT",
"version": "v1"
}
] |
2011-03-18
|
[
[
"Asperti",
"Andrea",
"",
"University of Bologna"
],
[
"Tassi",
"Enrico",
"",
"Microsoft\n Research - INRIA Joint Centre"
]
] |
The typical mathematical language systematically exploits notational and logical abuses whose resolution requires not just the knowledge of domain-specific notation and conventions, but also non-trivial skills in the given mathematical discipline. A large part of this background knowledge is expressed in the form of equalities and isomorphisms, allowing mathematicians to freely move between different incarnations of the same entity without even mentioning the transformation. Providing ITP-systems with similar capabilities seems to be a major way to improve their intelligence, and to ease the communication between the user and the machine. The present paper discusses our experience of integration of a superposition calculus within the Matita interactive prover, providing in particular a very flexible, "smart" application tactic, and a simple, innovative approach to automation.
|
2111.03263
|
Chenghong Bian
|
Chenghong Bian, Mingyu Yang, Chin-Wei Hsu, Hun-Seok Kim
|
Deep Learning Based Near-Orthogonal Superposition Code for Short Message
Transmission
|
6 pages, 7 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive machine type communication (mMTC) has attracted new coding schemes
optimized for reliable short message transmission. In this paper, a novel deep
learning based near-orthogonal superposition (NOS) coding scheme is proposed
for reliable transmission of short messages in the additive white Gaussian
noise (AWGN) channel for mMTC applications. Similar to recent hyper-dimensional
modulation (HDM), the NOS encoder spreads the information bits to multiple
near-orthogonal high dimensional vectors to be combined (superimposed) into a
single vector for transmission. The NOS decoder first estimates the information
vectors and then performs a cyclic redundancy check (CRC)-assisted K-best
tree-search algorithm to further reduce the packet error rate. The proposed NOS
encoder and decoder are deep neural networks (DNNs) jointly trained as an
autoencoder-decoder pair to learn a new NOS coding scheme with near-orthogonal
codewords. Simulation results show the proposed deep learning-based NOS scheme
outperforms HDM and Polar code with CRC-aided list decoding for short (32-bit)
message transmission.
|
[
{
"created": "Fri, 5 Nov 2021 05:15:13 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Nov 2021 16:20:10 GMT",
"version": "v2"
}
] |
2021-11-12
|
[
[
"Bian",
"Chenghong",
""
],
[
"Yang",
"Mingyu",
""
],
[
"Hsu",
"Chin-Wei",
""
],
[
"Kim",
"Hun-Seok",
""
]
] |
Massive machine type communication (mMTC) has attracted new coding schemes optimized for reliable short message transmission. In this paper, a novel deep learning based near-orthogonal superposition (NOS) coding scheme is proposed for reliable transmission of short messages in the additive white Gaussian noise (AWGN) channel for mMTC applications. Similar to recent hyper-dimensional modulation (HDM), the NOS encoder spreads the information bits to multiple near-orthogonal high dimensional vectors to be combined (superimposed) into a single vector for transmission. The NOS decoder first estimates the information vectors and then performs a cyclic redundancy check (CRC)-assisted K-best tree-search algorithm to further reduce the packet error rate. The proposed NOS encoder and decoder are deep neural networks (DNNs) jointly trained as an autoencoder-decoder pair to learn a new NOS coding scheme with near-orthogonal codewords. Simulation results show the proposed deep learning-based NOS scheme outperforms HDM and Polar code with CRC-aided list decoding for short (32-bit) message transmission.
|
1603.08244
|
Annina Bracher
|
Annina Bracher, Amos Lapidoth
|
Identification via the Broadcast Channel
|
83 pages, a shorter version is published in the IEEE Transactions on
Information Theory
|
IEEE Trans. Inf. Theory, vol. 63, no. 6, pp. 3480-3501, Jun. 2017
|
10.1109/TIT.2017.2674669
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The identification (ID) capacity region of the two-receiver broadcast channel
(BC) is shown to be the set of rate-pairs for which, for some distribution on
the channel input, each receiver's ID rate does not exceed the mutual
information between the channel input and the channel output that it observes.
Moreover, the capacity region's interior is achieved by codes with
deterministic encoders. The results are obtained under the average-error
criterion, which requires that each receiver reliably identify its message
whenever the message intended for the other receiver is drawn at random. They
hold also for channels whose transmission capacity region is to-date unknown.
Key to the proof is a new ID code construction for the single-user channel.
Extensions to the BC with one-sided feedback and the three-receiver BC are also
discussed: inner bounds on their ID capacity regions are obtained, and those
are shown to be in some cases tight.
|
[
{
"created": "Sun, 27 Mar 2016 19:01:31 GMT",
"version": "v1"
},
{
"created": "Wed, 31 May 2017 20:49:01 GMT",
"version": "v2"
}
] |
2017-06-02
|
[
[
"Bracher",
"Annina",
""
],
[
"Lapidoth",
"Amos",
""
]
] |
The identification (ID) capacity region of the two-receiver broadcast channel (BC) is shown to be the set of rate-pairs for which, for some distribution on the channel input, each receiver's ID rate does not exceed the mutual information between the channel input and the channel output that it observes. Moreover, the capacity region's interior is achieved by codes with deterministic encoders. The results are obtained under the average-error criterion, which requires that each receiver reliably identify its message whenever the message intended for the other receiver is drawn at random. They hold also for channels whose transmission capacity region is to-date unknown. Key to the proof is a new ID code construction for the single-user channel. Extensions to the BC with one-sided feedback and the three-receiver BC are also discussed: inner bounds on their ID capacity regions are obtained, and those are shown to be in some cases tight.
|
2407.20194
|
Maximum Wilder-Smith
|
Maximum Wilder-Smith, Vaishakh Patil, Marco Hutter
|
Radiance Fields for Robotic Teleoperation
|
8 pages, 10 figures, Accepted to IROS 2024
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Radiance field methods such as Neural Radiance Fields (NeRFs) or 3D Gaussian
Splatting (3DGS) have revolutionized graphics and novel view synthesis. Their
ability to synthesize new viewpoints with photo-realistic quality, as well as
capture complex volumetric and specular scenes, makes them an ideal
visualization for robotic teleoperation setups. Direct camera teleoperation
provides high-fidelity operation at the cost of maneuverability, while
reconstruction-based approaches offer controllable scenes with lower fidelity.
With this in mind, we propose replacing the traditional
reconstruction-visualization components of the robotic teleoperation pipeline
with online Radiance Fields, offering highly maneuverable scenes with
photorealistic quality. As such, there are three main contributions to state of
the art: (1) online training of Radiance Fields using live data from multiple
cameras, (2) support for a variety of radiance methods including NeRF and 3DGS,
(3) visualization suite for these methods including a virtual reality scene. To
enable seamless integration with existing setups, these components were tested
with multiple robots in multiple configurations and were displayed using
traditional tools as well as the VR headset. The results across methods and
robots were compared quantitatively to a baseline of mesh reconstruction, and a
user study was conducted to compare the different visualization methods. For
videos and code, check out https://leggedrobotics.github.io/rffr.github.io/.
|
[
{
"created": "Mon, 29 Jul 2024 17:20:55 GMT",
"version": "v1"
}
] |
2024-07-30
|
[
[
"Wilder-Smith",
"Maximum",
""
],
[
"Patil",
"Vaishakh",
""
],
[
"Hutter",
"Marco",
""
]
] |
Radiance field methods such as Neural Radiance Fields (NeRFs) or 3D Gaussian Splatting (3DGS) have revolutionized graphics and novel view synthesis. Their ability to synthesize new viewpoints with photo-realistic quality, as well as capture complex volumetric and specular scenes, makes them an ideal visualization for robotic teleoperation setups. Direct camera teleoperation provides high-fidelity operation at the cost of maneuverability, while reconstruction-based approaches offer controllable scenes with lower fidelity. With this in mind, we propose replacing the traditional reconstruction-visualization components of the robotic teleoperation pipeline with online Radiance Fields, offering highly maneuverable scenes with photorealistic quality. As such, there are three main contributions to state of the art: (1) online training of Radiance Fields using live data from multiple cameras, (2) support for a variety of radiance methods including NeRF and 3DGS, (3) visualization suite for these methods including a virtual reality scene. To enable seamless integration with existing setups, these components were tested with multiple robots in multiple configurations and were displayed using traditional tools as well as the VR headset. The results across methods and robots were compared quantitatively to a baseline of mesh reconstruction, and a user study was conducted to compare the different visualization methods. For videos and code, check out https://leggedrobotics.github.io/rffr.github.io/.
|
1904.08078
|
Seunghoon Lee
|
Jeremiah Blocki, Seunghoon Lee, Samson Zhou
|
Approximating Cumulative Pebbling Cost is Unique Games Hard
|
28 pages, updated figures and corrected typos
| null | null | null |
cs.CC cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The cumulative pebbling complexity of a directed acyclic graph $G$ is defined
as $\mathsf{cc}(G) = \min_P \sum_i |P_i|$, where the minimum is taken over all
legal (parallel) black pebblings of $G$ and $|P_i|$ denotes the number of
pebbles on the graph during round $i$. Intuitively, $\mathsf{cc}(G)$ captures
the amortized Space-Time complexity of pebbling $m$ copies of $G$ in parallel.
The cumulative pebbling complexity of a graph $G$ is of particular interest in
the field of cryptography as $\mathsf{cc}(G)$ is tightly related to the
amortized Area-Time complexity of the Data-Independent Memory-Hard Function
(iMHF) $f_{G,H}$ [AS15] defined using a constant indegree directed acyclic
graph (DAG) $G$ and a random oracle $H(\cdot)$. A secure iMHF should have
amortized Space-Time complexity as high as possible, e.g., to deter a
brute-force password attacker who wants to find $x$ such that $f_{G,H}(x) = h$. Thus, to
analyze the (in)security of a candidate iMHF $f_{G,H}$, it is crucial to
estimate the value $\mathsf{cc}(G)$ but currently, upper and lower bounds for
leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou
recently showed that it is $\mathsf{NP}$-Hard to compute $\mathsf{cc}(G)$, but
their techniques do not even rule out an efficient
$(1+\varepsilon)$-approximation algorithm for any constant $\varepsilon>0$. We
show that for any constant $c > 0$, it is Unique Games hard to approximate
$\mathsf{cc}(G)$ to within a factor of $c$.
(See the paper for the full abstract.)
|
[
{
"created": "Wed, 17 Apr 2019 04:36:53 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Aug 2019 21:52:46 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Nov 2019 23:06:04 GMT",
"version": "v3"
}
] |
2019-11-19
|
[
[
"Blocki",
"Jeremiah",
""
],
[
"Lee",
"Seunghoon",
""
],
[
"Zhou",
"Samson",
""
]
] |
The cumulative pebbling complexity of a directed acyclic graph $G$ is defined as $\mathsf{cc}(G) = \min_P \sum_i |P_i|$, where the minimum is taken over all legal (parallel) black pebblings of $G$ and $|P_i|$ denotes the number of pebbles on the graph during round $i$. Intuitively, $\mathsf{cc}(G)$ captures the amortized Space-Time complexity of pebbling $m$ copies of $G$ in parallel. The cumulative pebbling complexity of a graph $G$ is of particular interest in the field of cryptography as $\mathsf{cc}(G)$ is tightly related to the amortized Area-Time complexity of the Data-Independent Memory-Hard Function (iMHF) $f_{G,H}$ [AS15] defined using a constant indegree directed acyclic graph (DAG) $G$ and a random oracle $H(\cdot)$. A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find $x$ such that $f_{G,H}(x) = h$. Thus, to analyze the (in)security of a candidate iMHF $f_{G,H}$, it is crucial to estimate the value $\mathsf{cc}(G)$, but currently, upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is $\mathsf{NP}$-Hard to compute $\mathsf{cc}(G)$, but their techniques do not even rule out an efficient $(1+\varepsilon)$-approximation algorithm for any constant $\varepsilon>0$. We show that for any constant $c > 0$, it is Unique Games hard to approximate $\mathsf{cc}(G)$ to within a factor of $c$. (See the paper for the full abstract.)
|
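The quantity defined in this abstract, $\mathsf{cc}(G) = \min_P \sum_i |P_i|$, is easy to evaluate for one *given* pebbling; the hardness the abstract establishes is in minimizing over all legal pebblings. The sketch below (function names and the legality rule for parallel black pebbling are our own rendering, not the paper's code) evaluates the cost of a candidate pebbling and checks its legality:

```python
def cumulative_cost(pebbling):
    """cc of one (parallel) black pebbling P_1..P_t of a DAG:
    sum_i |P_i|.  cc(G) itself is the minimum of this quantity over all
    legal pebblings of G; the abstract shows even approximating that
    minimum to within any constant factor is Unique Games hard."""
    return sum(len(p) for p in pebbling)

def is_legal(pebbling, parents):
    """A pebble may be newly placed on v in round i only if every
    parent of v carried a pebble in round i-1 (sources have no
    parents, so they can be pebbled at any time)."""
    prev = set()
    for current in map(set, pebbling):
        for v in current - prev:
            if not set(parents.get(v, ())) <= prev:
                return False
        prev = current
    return True
```

For the path DAG 1 → 2 → 3 (`parents = {2: [1], 3: [2]}`), the pebbling `[{1}, {1, 2}, {2, 3}]` is legal with cumulative cost 1 + 2 + 2 = 5, whereas starting by pebbling node 2 is illegal because its parent was never pebbled.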
2407.12623
|
Andrew Jeffery
|
Andrew Jeffery, Julien Maffre, Heidi Howard, Richard Mortier
|
LSKV: A Confidential Distributed Datastore to Protect Critical Data in
the Cloud
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Software services are increasingly migrating to the cloud, requiring trust in
actors with direct access to the hardware, software and data comprising the
service. A distributed datastore storing critical data sits at the core of many
services; a prime example being etcd in Kubernetes. Trusted execution
environments can secure this data from cloud providers during execution, but it
is complex to build trustworthy data storage systems using such mechanisms. We
present the design and evaluation of the Ledger-backed Secure Key-Value
datastore (LSKV), a distributed datastore that provides an etcd-like API but
can use trusted execution mechanisms to keep cloud providers outside the trust
boundary. LSKV provides a path to transition traditional systems towards
confidential execution, provides competitive performance compared to etcd, and
helps clients to gain trust in intermediary services. LSKV forms a foundational
core, lowering the barriers to building more trustworthy systems.
|
[
{
"created": "Wed, 17 Jul 2024 14:50:24 GMT",
"version": "v1"
}
] |
2024-07-18
|
[
[
"Jeffery",
"Andrew",
""
],
[
"Maffre",
"Julien",
""
],
[
"Howard",
"Heidi",
""
],
[
"Mortier",
"Richard",
""
]
] |
Software services are increasingly migrating to the cloud, requiring trust in actors with direct access to the hardware, software and data comprising the service. A distributed datastore storing critical data sits at the core of many services; a prime example being etcd in Kubernetes. Trusted execution environments can secure this data from cloud providers during execution, but it is complex to build trustworthy data storage systems using such mechanisms. We present the design and evaluation of the Ledger-backed Secure Key-Value datastore (LSKV), a distributed datastore that provides an etcd-like API but can use trusted execution mechanisms to keep cloud providers outside the trust boundary. LSKV provides a path to transition traditional systems towards confidential execution, provides competitive performance compared to etcd, and helps clients to gain trust in intermediary services. LSKV forms a foundational core, lowering the barriers to building more trustworthy systems.
|
2112.10969
|
Fanqing Lin
|
Fanqing Lin, Brian Price, Tony Martinez
|
Generalizing Interactive Backpropagating Refinement for Dense Prediction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As deep neural networks become the state-of-the-art approach in the field of
computer vision for dense prediction tasks, many methods have been developed
for automatic estimation of the target outputs given the visual inputs.
Although the estimation accuracy of the proposed automatic methods continues to
improve, interactive refinement is oftentimes necessary for further correction.
Recently, feature backpropagating refinement scheme (f-BRS) has been proposed
for the task of interactive segmentation, which enables efficient optimization
of a small set of auxiliary variables inserted into the pretrained network to
produce object segmentation that better aligns with user inputs. However, the
proposed auxiliary variables only contain channel-wise scale and bias, limiting
the optimization to global refinement only. In this work, in order to
generalize backpropagating refinement for a wide range of dense prediction
tasks, we introduce a set of G-BRS (Generalized Backpropagating Refinement
Scheme) layers that enable both global and localized refinement for the
following tasks: interactive segmentation, semantic segmentation, image matting
and monocular depth estimation. Experiments on SBD, Cityscapes, Mapillary
Vista, Composition-1k and NYU-Depth-V2 show that our method can successfully
generalize and significantly improve performance of existing pretrained
state-of-the-art models with only a few clicks.
|
[
{
"created": "Tue, 21 Dec 2021 03:52:08 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Dec 2021 11:07:46 GMT",
"version": "v2"
}
] |
2021-12-23
|
[
[
"Lin",
"Fanqing",
""
],
[
"Price",
"Brian",
""
],
[
"Martinez",
"Tony",
""
]
] |
As deep neural networks become the state-of-the-art approach in the field of computer vision for dense prediction tasks, many methods have been developed for automatic estimation of the target outputs given the visual inputs. Although the estimation accuracy of the proposed automatic methods continues to improve, interactive refinement is oftentimes necessary for further correction. Recently, feature backpropagating refinement scheme (f-BRS) has been proposed for the task of interactive segmentation, which enables efficient optimization of a small set of auxiliary variables inserted into the pretrained network to produce object segmentation that better aligns with user inputs. However, the proposed auxiliary variables only contain channel-wise scale and bias, limiting the optimization to global refinement only. In this work, in order to generalize backpropagating refinement for a wide range of dense prediction tasks, we introduce a set of G-BRS (Generalized Backpropagating Refinement Scheme) layers that enable both global and localized refinement for the following tasks: interactive segmentation, semantic segmentation, image matting and monocular depth estimation. Experiments on SBD, Cityscapes, Mapillary Vista, Composition-1k and NYU-Depth-V2 show that our method can successfully generalize and significantly improve performance of existing pretrained state-of-the-art models with only a few clicks.
|
2406.12553
|
Michael Dorner
|
Michael Dorner and Daniel Mendez and Ehsan Zabardast and Nicole Valdez
and Marcin Floryan
|
Measuring Information Diffusion in Code Review at Spotify
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Background: As a core practice in software engineering, the nature of code
review has been frequently subject to research. Prior exploratory studies found
that code review, the discussion around a code change among humans, forms a
communication network that enables its participants to exchange and spread
information. Although popular in software engineering, there is no confirmatory
research corroborating this theory and the actual extent of information
diffusion in code review is not well understood.
Objective: In this registered report, we propose an observational study to
measure information diffusion in code review to test the theory of code review
as communication network.
Method: We approximate the information diffusion in code review through the
frequency and the similarity between (1) human participants, (2) affected
components, and (3) involved teams of linked code reviews. The measurements
approximating the information diffusion in code review serve as a foundation
for falsifying the theory of code review as communication network.
|
[
{
"created": "Tue, 18 Jun 2024 12:29:09 GMT",
"version": "v1"
}
] |
2024-06-19
|
[
[
"Dorner",
"Michael",
""
],
[
"Mendez",
"Daniel",
""
],
[
"Zabardast",
"Ehsan",
""
],
[
"Valdez",
"Nicole",
""
],
[
"Floryan",
"Marcin",
""
]
] |
Background: As a core practice in software engineering, the nature of code review has been frequently subject to research. Prior exploratory studies found that code review, the discussion around a code change among humans, forms a communication network that enables its participants to exchange and spread information. Although popular in software engineering, there is no confirmatory research corroborating this theory and the actual extent of information diffusion in code review is not well understood. Objective: In this registered report, we propose an observational study to measure information diffusion in code review to test the theory of code review as communication network. Method: We approximate the information diffusion in code review through the frequency and the similarity between (1) human participants, (2) affected components, and (3) involved teams of linked code reviews. The measurements approximating the information diffusion in code review serve as a foundation for falsifying the theory of code review as communication network.
|
2310.12558
|
Chenglei Si
|
Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng,
Hal Daum\'e III, Jordan Boyd-Graber
|
Large Language Models Help Humans Verify Truthfulness -- Except When
They Are Convincingly Wrong
|
NAACL 2024
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) are increasingly used for accessing information
on the web. Their truthfulness and factuality are thus of great interest. To
help users make the right decisions about the information they get, LLMs should
not only provide information but also help users fact-check it. Our experiments
with 80 crowdworkers compare language models with search engines (information
retrieval systems) at facilitating fact-checking. We prompt LLMs to validate a
given claim and provide corresponding explanations. Users reading LLM
explanations are significantly more efficient than those using search engines
while achieving similar accuracy. However, they over-rely on the LLMs when the
explanation is wrong. To reduce over-reliance on LLMs, we ask LLMs to provide
contrastive information - explain both why the claim is true and false, and
then we present both sides of the explanation to users. This contrastive
explanation mitigates users' over-reliance on LLMs, but cannot significantly
outperform search engines. Further, showing both search engine results and LLM
explanations offers no complementary benefits compared to search engines alone.
Taken together, our study highlights that natural language explanations by LLMs
may not be a reliable replacement for reading the retrieved passages,
especially in high-stakes settings where over-relying on wrong AI explanations
could lead to critical consequences.
|
[
{
"created": "Thu, 19 Oct 2023 08:09:58 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Apr 2024 21:55:06 GMT",
"version": "v2"
}
] |
2024-04-03
|
[
[
"Si",
"Chenglei",
""
],
[
"Goyal",
"Navita",
""
],
[
"Wu",
"Sherry Tongshuang",
""
],
[
"Zhao",
"Chen",
""
],
[
"Feng",
"Shi",
""
],
[
"Daumé",
"Hal",
"III"
],
[
"Boyd-Graber",
"Jordan",
""
]
] |
Large Language Models (LLMs) are increasingly used for accessing information on the web. Their truthfulness and factuality are thus of great interest. To help users make the right decisions about the information they get, LLMs should not only provide information but also help users fact-check it. Our experiments with 80 crowdworkers compare language models with search engines (information retrieval systems) at facilitating fact-checking. We prompt LLMs to validate a given claim and provide corresponding explanations. Users reading LLM explanations are significantly more efficient than those using search engines while achieving similar accuracy. However, they over-rely on the LLMs when the explanation is wrong. To reduce over-reliance on LLMs, we ask LLMs to provide contrastive information - explain both why the claim is true and false, and then we present both sides of the explanation to users. This contrastive explanation mitigates users' over-reliance on LLMs, but cannot significantly outperform search engines. Further, showing both search engine results and LLM explanations offers no complementary benefits compared to search engines alone. Taken together, our study highlights that natural language explanations by LLMs may not be a reliable replacement for reading the retrieved passages, especially in high-stakes settings where over-relying on wrong AI explanations could lead to critical consequences.
|
1403.1381
|
Daniel Zaragoza
|
Daniel Zaragoza
|
The TCP-modified Engset Model Revisited
|
arXiv admin note: substantial text overlap with arXiv:1401.8173
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the TCP-modified Engset model proposed by Heyman et al. in [1].
The model deals with the superposition of a limited number of TCP connections
alternating between file transmission and silence in a web-like fashion. We
consider homogeneous sources only. (a) We take into account the effects of slow
start and limited receiver window as well as small average file sizes. (b) We
propose an alternative way for calculating the average connection rate in the
superposition. (c) From the model we propose a way for calculating the queuing
behavior; i.e., the overflow probability. (d) From this last point, we propose
a new link buffer sizing rule. Comparison with extensive simulations shows that
the average rate and duration, as well as, link utilization are accurately
predicted for exponentially distributed file sizes. For longer tail
distributions, the model remains accurate provided the receiver window is
adjusted appropriately. The accuracy increases with increasing load. As
concerns the queuing behavior, the same observation applies. Finally, the
revisited model cannot be used to predict losses larger than about 1%. The
model overestimates loss rates above that threshold.
|
[
{
"created": "Thu, 6 Mar 2014 09:27:47 GMT",
"version": "v1"
}
] |
2014-03-07
|
[
[
"Zaragoza",
"Daniel",
""
]
] |
We revisit the TCP-modified Engset model proposed by Heyman et al. in [1]. The model deals with the superposition of a limited number of TCP connections alternating between file transmission and silence in a web-like fashion. We consider homogeneous sources only. (a) We take into account the effects of slow start and limited receiver window as well as small average file sizes. (b) We propose an alternative way for calculating the average connection rate in the superposition. (c) From the model we propose a way for calculating the queuing behavior; i.e., the overflow probability. (d) From this last point, we propose a new link buffer sizing rule. Comparison with extensive simulations shows that the average rate and duration, as well as, link utilization are accurately predicted for exponentially distributed file sizes. For longer tail distributions, the model remains accurate provided the receiver window is adjusted appropriately. The accuracy increases with increasing load. As concerns the queuing behavior, the same observation applies. Finally, the revisited model cannot be used to predict losses larger than about 1%. The model overestimates loss rates above that threshold.
|
2309.14868
|
Yang Zhao
|
Yuan Chen, Zhiliang Ma and Yang Zhao
|
Cross-Dataset-Robust Method for Blind Real-World Image Quality
Assessment
|
10 pages, 6 figures
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although many effective models and real-world datasets have been presented
for blind image quality assessment (BIQA), recent BIQA models usually tend to
fit specific training set. Hence, it is still difficult to accurately and
robustly measure the visual quality of an arbitrary real-world image. In this
paper, a robust BIQA method is designed based on three aspects, i.e., robust
training strategy, large-scale real-world dataset, and powerful backbone.
First, many individual models based on popular and state-of-the-art (SOTA)
Swin-Transformer (SwinT) are trained on different real-world BIQA datasets
respectively. Then, these biased SwinT-based models are jointly used to
generate pseudo-labels, which adopts the probability of relative quality of two
random images instead of fixed quality score. A large-scale real-world image
dataset with 1,000,000 image pairs and pseudo-labels is then proposed for
training the final cross-dataset-robust model. Experimental results on
cross-dataset tests show that the performance of the proposed method is even
better than some SOTA methods that are directly trained on these datasets, thus
verifying the robustness and generalization of our method.
|
[
{
"created": "Tue, 26 Sep 2023 11:57:12 GMT",
"version": "v1"
}
] |
2023-09-27
|
[
[
"Chen",
"Yuan",
""
],
[
"Ma",
"Zhiliang",
""
],
[
"Zhao",
"Yang",
""
]
] |
Although many effective models and real-world datasets have been presented for blind image quality assessment (BIQA), recent BIQA models usually tend to fit specific training set. Hence, it is still difficult to accurately and robustly measure the visual quality of an arbitrary real-world image. In this paper, a robust BIQA method is designed based on three aspects, i.e., robust training strategy, large-scale real-world dataset, and powerful backbone. First, many individual models based on popular and state-of-the-art (SOTA) Swin-Transformer (SwinT) are trained on different real-world BIQA datasets respectively. Then, these biased SwinT-based models are jointly used to generate pseudo-labels, which adopts the probability of relative quality of two random images instead of fixed quality score. A large-scale real-world image dataset with 1,000,000 image pairs and pseudo-labels is then proposed for training the final cross-dataset-robust model. Experimental results on cross-dataset tests show that the performance of the proposed method is even better than some SOTA methods that are directly trained on these datasets, thus verifying the robustness and generalization of our method.
|
2009.14015
|
W. Spencer Smith Dr.
|
Spencer Smith and Jacques Carette
|
Long-term Productivity for Long-term Impact
|
9 pages, Collegeville Workshop on Scientific Software Whitepaper,
2020
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new conceptual definition of 'productivity' for sustainably
developing research software. Existing definitions are flawed as they are
short-term biased, thus devaluing long-term impact, which we consider to be the
principal goal. Taking a long-term view of productivity helps fix that problem.
We view the outputs of the development process as knowledge and user
satisfaction. User satisfaction is used as a proxy for effective quality. The
explicit emphasis on all knowledge produced, rather than just the
operationalizable knowledge (code) implies that human-reusable knowledge, i.e.
documentation, should also be greatly valued when producing research software.
|
[
{
"created": "Tue, 29 Sep 2020 13:48:44 GMT",
"version": "v1"
}
] |
2020-09-30
|
[
[
"Smith",
"Spencer",
""
],
[
"Carette",
"Jacques",
""
]
] |
We present a new conceptual definition of 'productivity' for sustainably developing research software. Existing definitions are flawed as they are short-term biased, thus devaluing long-term impact, which we consider to be the principal goal. Taking a long-term view of productivity helps fix that problem. We view the outputs of the development process as knowledge and user satisfaction. User satisfaction is used as a proxy for effective quality. The explicit emphasis on all knowledge produced, rather than just the operationalizable knowledge (code) implies that human-reusable knowledge, i.e. documentation, should also be greatly valued when producing research software.
|
1711.06976
|
Lex Fridman
|
Lex Fridman, Daniel E. Brown, Michael Glazer, William Angell, Spencer
Dodd, Benedikt Jenik, Jack Terwilliger, Aleksandr Patsekin, Julia
Kindelsberger, Li Ding, Sean Seaman, Alea Mehler, Andrew Sipperley, Anthony
Pettinato, Bobbie Seppelt, Linda Angell, Bruce Mehler, Bryan Reimer
|
MIT Advanced Vehicle Technology Study: Large-Scale Naturalistic Driving
Study of Driver Behavior and Interaction with Automation
| null |
IEEE Access, vol. 7, pp. 102021-102038, 2019
|
10.1109/ACCESS.2019.2926040
| null |
cs.CY cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the foreseeable future, human beings will likely remain an integral part
of the driving task, monitoring the AI system as it performs anywhere from just
over 0% to just under 100% of the driving. The governing objectives of the MIT
Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale
real-world driving data collection that includes high-definition video to fuel
the development of deep learning based internal and external perception
systems, (2) gain a holistic understanding of how human beings interact with
vehicle automation technology by integrating video data with vehicle state
data, driver characteristics, mental models, and self-reported experiences with
technology, and (3) identify how technology and other factors related to
automation adoption and use can be improved in ways that save lives. In
pursuing these objectives, we have instrumented 23 Tesla Model S and Model X
vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6
vehicles for both long-term (over a year per driver) and medium term (one month
per driver) naturalistic driving data collection. Furthermore, we are
continually developing new methods for analysis of the massive-scale dataset
collected from the instrumented vehicle fleet. The recorded data streams
include IMU, GPS, CAN messages, and high-definition video streams of the driver
face, the driver cabin, the forward roadway, and the instrument cluster (on
select vehicles). The study is on-going and growing. To date, we have 122
participants, 15,610 days of participation, 511,638 miles, and 7.1 billion
video frames. This paper presents the design of the study, the data collection
hardware, the processing of the data, and the computer vision algorithms
currently being used to extract actionable knowledge from the data.
|
[
{
"created": "Sun, 19 Nov 2017 06:46:21 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Sep 2018 04:02:20 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Apr 2019 01:13:57 GMT",
"version": "v3"
},
{
"created": "Wed, 14 Aug 2019 11:17:00 GMT",
"version": "v4"
}
] |
2019-08-15
|
[
[
"Fridman",
"Lex",
""
],
[
"Brown",
"Daniel E.",
""
],
[
"Glazer",
"Michael",
""
],
[
"Angell",
"William",
""
],
[
"Dodd",
"Spencer",
""
],
[
"Jenik",
"Benedikt",
""
],
[
"Terwilliger",
"Jack",
""
],
[
"Patsekin",
"Aleksandr",
""
],
[
"Kindelsberger",
"Julia",
""
],
[
"Ding",
"Li",
""
],
[
"Seaman",
"Sean",
""
],
[
"Mehler",
"Alea",
""
],
[
"Sipperley",
"Andrew",
""
],
[
"Pettinato",
"Anthony",
""
],
[
"Seppelt",
"Bobbie",
""
],
[
"Angell",
"Linda",
""
],
[
"Mehler",
"Bruce",
""
],
[
"Reimer",
"Bryan",
""
]
] |
For the foreseeable future, human beings will likely remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. The governing objectives of the MIT Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep learning based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 23 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is on-going and growing. To date, we have 122 participants, 15,610 days of participation, 511,638 miles, and 7.1 billion video frames. This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data.
|
2112.12411
|
Aur\'elien Bellet
|
Riad Ladjel, Nicolas Anciaux, Aur\'elien Bellet, Guillaume Scerri
|
Mitigating Leakage from Data Dependent Communications in Decentralized
Computing using Differential Privacy
| null | null | null | null |
cs.CR cs.DB cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Imagine a group of citizens willing to collectively contribute their personal
data for the common good to produce socially useful information, resulting from
data analytics or machine learning computations. Sharing raw personal data with
a centralized server performing the computation could raise concerns about
privacy and a perceived risk of mass surveillance. Instead, citizens may trust
each other and their own devices to engage into a decentralized computation to
collaboratively produce an aggregate data release to be shared. In the context
of secure computing nodes exchanging messages over secure channels at runtime,
a key security issue is to protect against external attackers observing the
traffic, whose dependence on data may reveal personal information. Existing
solutions are designed for the cloud setting, with the goal of hiding all
properties of the underlying dataset, and do not address the specific privacy
and efficiency challenges that arise in the above context. In this paper, we
define a general execution model to control the data-dependence of
communications in user-side decentralized computations, in which differential
privacy guarantees for communication patterns in global execution plans can be
analyzed by combining guarantees obtained on local clusters of nodes. We
propose a set of algorithms which allow to trade-off between privacy, utility
and efficiency. Our formal privacy guarantees leverage and extend recent
results on privacy amplification by shuffling. We illustrate the usefulness of
our proposal on two representative examples of decentralized execution plans
with data-dependent communications.
|
[
{
"created": "Thu, 23 Dec 2021 08:30:17 GMT",
"version": "v1"
}
] |
2021-12-24
|
[
[
"Ladjel",
"Riad",
""
],
[
"Anciaux",
"Nicolas",
""
],
[
"Bellet",
"Aurélien",
""
],
[
"Scerri",
"Guillaume",
""
]
] |
Imagine a group of citizens willing to collectively contribute their personal data for the common good to produce socially useful information, resulting from data analytics or machine learning computations. Sharing raw personal data with a centralized server performing the computation could raise concerns about privacy and a perceived risk of mass surveillance. Instead, citizens may trust each other and their own devices to engage into a decentralized computation to collaboratively produce an aggregate data release to be shared. In the context of secure computing nodes exchanging messages over secure channels at runtime, a key security issue is to protect against external attackers observing the traffic, whose dependence on data may reveal personal information. Existing solutions are designed for the cloud setting, with the goal of hiding all properties of the underlying dataset, and do not address the specific privacy and efficiency challenges that arise in the above context. In this paper, we define a general execution model to control the data-dependence of communications in user-side decentralized computations, in which differential privacy guarantees for communication patterns in global execution plans can be analyzed by combining guarantees obtained on local clusters of nodes. We propose a set of algorithms which allow to trade-off between privacy, utility and efficiency. Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling. We illustrate the usefulness of our proposal on two representative examples of decentralized execution plans with data-dependent communications.
|
2303.00815
|
Jingli Shi
|
Jingli Shi, Weihua Li, Quan Bai, Yi Yang, Jianhua Jiang
|
Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis
|
22 pages
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Aspect term extraction is a fundamental task in fine-grained sentiment
analysis, which aims at detecting customer's opinion targets from reviews on
product or service. The traditional supervised models can achieve promising
results with annotated datasets, however, the performance dramatically
decreases when they are applied to the task of cross-domain aspect term
extraction. Existing cross-domain transfer learning methods either directly
inject linguistic features into Language models, making it difficult to
transfer linguistic knowledge to target domain, or rely on the fixed predefined
prompts, which is time-consuming to construct the prompts over all potential
aspect term spans. To resolve the limitations, we propose a soft prompt-based
joint learning method for cross domain aspect term extraction in this paper.
Specifically, by incorporating external linguistic features, the proposed
method learn domain-invariant representations between source and target domains
via multiple objectives, which bridges the gap between domains with varied
distributions of aspect terms. Further, the proposed method interpolates a set
of transferable soft prompts consisted of multiple learnable vectors that are
beneficial to detect aspect terms in target domain. Extensive experiments are
conducted on the benchmark datasets and the experimental results demonstrate
the effectiveness of the proposed method for cross-domain aspect terms
extraction.
|
[
{
"created": "Wed, 1 Mar 2023 20:33:37 GMT",
"version": "v1"
}
] |
2023-03-03
|
[
[
"Shi",
"Jingli",
""
],
[
"Li",
"Weihua",
""
],
[
"Bai",
"Quan",
""
],
[
"Yang",
"Yi",
""
],
[
"Jiang",
"Jianhua",
""
]
] |
Aspect term extraction is a fundamental task in fine-grained sentiment analysis, which aims at detecting customer's opinion targets from reviews on product or service. The traditional supervised models can achieve promising results with annotated datasets, however, the performance dramatically decreases when they are applied to the task of cross-domain aspect term extraction. Existing cross-domain transfer learning methods either directly inject linguistic features into Language models, making it difficult to transfer linguistic knowledge to target domain, or rely on the fixed predefined prompts, which is time-consuming to construct the prompts over all potential aspect term spans. To resolve the limitations, we propose a soft prompt-based joint learning method for cross domain aspect term extraction in this paper. Specifically, by incorporating external linguistic features, the proposed method learn domain-invariant representations between source and target domains via multiple objectives, which bridges the gap between domains with varied distributions of aspect terms. Further, the proposed method interpolates a set of transferable soft prompts consisted of multiple learnable vectors that are beneficial to detect aspect terms in target domain. Extensive experiments are conducted on the benchmark datasets and the experimental results demonstrate the effectiveness of the proposed method for cross-domain aspect terms extraction.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.