id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1312.7444 | Narayan Chakraborty | Mohammad Jabed Morshed Chowdhury and Narayan Ranjan Chakraborty | CAPTCHA Based on Human Cognitive Factor | International Journal of Advanced Computer Science and Applications,
2013 | null | null | null | cs.HC cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A CAPTCHA (Completely Automated Public Turing test to tell Computers and
Humans Apart) is an automatic security mechanism used to determine whether the
user is a human or a malicious computer program. It is a program that generates
and grades tests that are human solvable but are intended to be beyond the
capabilities of current computer programs. A CAPTCHA should be designed to be
very easy for humans but very hard for machines. Unfortunately, existing
CAPTCHA systems, while trying to maximize the difficulty for automated programs
to pass tests by increasing distortion or noise, have consequently also become
very difficult for potential users. To address this issue, this paper proposes
an alternative form of CAPTCHA that presents a variety of questions drawn from
mathematical, logical and general problems that only a human can understand and
answer correctly in a given time. The proposed framework supports diversity in
the choice of questions to be answered and offers a user-friendly interface. A
user study is also conducted, with participants from different backgrounds, to
judge the performance of the developed system. The study shows the efficacy of
the implemented system, with a good level of user satisfaction compared to the
traditional CAPTCHAs available today.
| [
{
"created": "Sat, 28 Dec 2013 15:28:23 GMT",
"version": "v1"
}
] | 2013-12-31 | [
[
"Chowdhury",
"Mohammad Jabed Morshed",
""
],
[
"Chakraborty",
"Narayan Ranjan",
""
]
] | A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is an automatic security mechanism used to determine whether the user is a human or a malicious computer program. It is a program that generates and grades tests that are human solvable but are intended to be beyond the capabilities of current computer programs. A CAPTCHA should be designed to be very easy for humans but very hard for machines. Unfortunately, existing CAPTCHA systems, while trying to maximize the difficulty for automated programs to pass tests by increasing distortion or noise, have consequently also become very difficult for potential users. To address this issue, this paper proposes an alternative form of CAPTCHA that presents a variety of questions drawn from mathematical, logical and general problems that only a human can understand and answer correctly in a given time. The proposed framework supports diversity in the choice of questions to be answered and offers a user-friendly interface. A user study is also conducted, with participants from different backgrounds, to judge the performance of the developed system. The study shows the efficacy of the implemented system, with a good level of user satisfaction compared to the traditional CAPTCHAs available today. |
2308.01262 | Michael Gableman | Michael Gableman and Avinash Kak | Incorporating Season and Solar Specificity into Renderings made by a
NeRF Architecture using Satellite Images | 18 pages, 17 figures, 10 tables | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a result of Shadow NeRF and Sat-NeRF, it is possible to take the solar
angle into account in a NeRF-based framework for rendering a scene from a novel
viewpoint using satellite images for training. Our work extends those
contributions and shows how one can make the renderings season-specific. Our
main challenge was creating a Neural Radiance Field (NeRF) that could render
seasonal features independently of viewing angle and solar angle while still
being able to render shadows. We teach our network to render seasonal features
by introducing one more input variable -- time of the year. However, the small
training datasets typical of satellite imagery can introduce ambiguities in
cases where shadows are present in the same location for every image of a
particular season. We add additional terms to the loss function to discourage
the network from using seasonal features to account for shadows. We show
the performance of our network on eight Areas of Interest containing images
captured by the Maxar WorldView-3 satellite. This evaluation includes tests
measuring the ability of our framework to accurately render novel views,
generate height maps, predict shadows, and specify seasonal features
independently from shadows. Our ablation studies justify the choices made for
network design parameters.
| [
{
"created": "Wed, 2 Aug 2023 16:30:18 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2023 16:33:17 GMT",
"version": "v2"
}
] | 2023-12-18 | [
[
"Gableman",
"Michael",
""
],
[
"Kak",
"Avinash",
""
]
] | As a result of Shadow NeRF and Sat-NeRF, it is possible to take the solar angle into account in a NeRF-based framework for rendering a scene from a novel viewpoint using satellite images for training. Our work extends those contributions and shows how one can make the renderings season-specific. Our main challenge was creating a Neural Radiance Field (NeRF) that could render seasonal features independently of viewing angle and solar angle while still being able to render shadows. We teach our network to render seasonal features by introducing one more input variable -- time of the year. However, the small training datasets typical of satellite imagery can introduce ambiguities in cases where shadows are present in the same location for every image of a particular season. We add additional terms to the loss function to discourage the network from using seasonal features to account for shadows. We show the performance of our network on eight Areas of Interest containing images captured by the Maxar WorldView-3 satellite. This evaluation includes tests measuring the ability of our framework to accurately render novel views, generate height maps, predict shadows, and specify seasonal features independently from shadows. Our ablation studies justify the choices made for network design parameters. |
2104.08817 | Javier Iranzo-S\'anchez | Javier Iranzo-S\'anchez and Jorge Civera and Alfons Juan | Stream-level Latency Evaluation for Simultaneous Machine Translation | EMNLP 2021 Camera Ready | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Simultaneous machine translation has recently gained traction thanks to
significant quality improvements and the advent of streaming applications.
Simultaneous translation systems need to find a trade-off between translation
quality and response time, and to this end multiple latency measures have been
proposed. However, latency evaluations for simultaneous translation are
estimated at the sentence level, not taking into account the sequential nature
of a streaming scenario. Indeed, these sentence-level latency measures are not
well suited for continuous stream translation, resulting in figures that are
not coherent with the simultaneous translation policy of the system being
assessed. This work proposes a stream-level adaptation of the current latency
measures based on a re-segmentation approach applied to the output translation,
which is successfully evaluated under streaming conditions for a reference
IWSLT task.
| [
{
"created": "Sun, 18 Apr 2021 11:16:17 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Sep 2021 11:16:15 GMT",
"version": "v2"
}
] | 2021-09-09 | [
[
"Iranzo-Sánchez",
"Javier",
""
],
[
"Civera",
"Jorge",
""
],
[
"Juan",
"Alfons",
""
]
] | Simultaneous machine translation has recently gained traction thanks to significant quality improvements and the advent of streaming applications. Simultaneous translation systems need to find a trade-off between translation quality and response time, and to this end multiple latency measures have been proposed. However, latency evaluations for simultaneous translation are estimated at the sentence level, not taking into account the sequential nature of a streaming scenario. Indeed, these sentence-level latency measures are not well suited for continuous stream translation, resulting in figures that are not coherent with the simultaneous translation policy of the system being assessed. This work proposes a stream-level adaptation of the current latency measures based on a re-segmentation approach applied to the output translation, which is successfully evaluated under streaming conditions for a reference IWSLT task. |
2101.01298 | Pattaraporn Sangaroonsilp | Pattaraporn Sangaroonsilp, Hoa Khanh Dam, Morakot Choetkiertikul,
Chaiyong Ragkhitwetsagul, Aditya Ghose | A Taxonomy for Mining and Classifying Privacy Requirements in Issue
Reports | Accepted at Journal of Information and Software Technology | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Context: Digital and physical trails of user activities are collected through
the use of software applications and systems. As software becomes ubiquitous,
protecting user privacy has become challenging. With increasing user privacy
awareness and the advent of privacy regulations and policies, there is an
emerging need to implement software systems that enhance the protection of
personal data processing. However, existing data protection and privacy
regulations provide key principles only at a high level, making it difficult
for software engineers to design and implement privacy-aware systems.
Objective: In this paper, we develop a taxonomy that provides a comprehensive
set of privacy requirements based on four well-established personal data
protection regulations and privacy frameworks: the General Data Protection
Regulation (GDPR), ISO/IEC 29100, the Thailand Personal Data Protection Act
(Thailand PDPA) and the Asia-Pacific Economic Cooperation (APEC) privacy
framework. Methods: These requirements are extracted, refined and classified
(using the goal-based requirements analysis method) to a level of detail that
can be mapped to issue reports. We have also performed a study on how two
large open-source software
projects (Google Chrome and Moodle) address the privacy requirements in our
taxonomy through mining their issue reports. Results: The paper discusses how
the collected issues were classified, and presents the findings and insights
generated from our study. Conclusion: Mining and classifying privacy
requirements in issue reports can help organisations be aware of their state of
compliance by identifying privacy requirements that have not been addressed in
their software projects. The taxonomy also makes it possible to trace back to
the regulations, standards and frameworks that the software projects have not
complied with, based on the identified privacy requirements.
| [
{
"created": "Tue, 5 Jan 2021 00:31:19 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Feb 2023 01:19:10 GMT",
"version": "v2"
}
] | 2023-02-07 | [
[
"Sangaroonsilp",
"Pattaraporn",
""
],
[
"Dam",
"Hoa Khanh",
""
],
[
"Choetkiertikul",
"Morakot",
""
],
[
"Ragkhitwetsagul",
"Chaiyong",
""
],
[
"Ghose",
"Aditya",
""
]
] | Context: Digital and physical trails of user activities are collected through the use of software applications and systems. As software becomes ubiquitous, protecting user privacy has become challenging. With increasing user privacy awareness and the advent of privacy regulations and policies, there is an emerging need to implement software systems that enhance the protection of personal data processing. However, existing data protection and privacy regulations provide key principles only at a high level, making it difficult for software engineers to design and implement privacy-aware systems. Objective: In this paper, we develop a taxonomy that provides a comprehensive set of privacy requirements based on four well-established personal data protection regulations and privacy frameworks: the General Data Protection Regulation (GDPR), ISO/IEC 29100, the Thailand Personal Data Protection Act (Thailand PDPA) and the Asia-Pacific Economic Cooperation (APEC) privacy framework. Methods: These requirements are extracted, refined and classified (using the goal-based requirements analysis method) to a level of detail that can be mapped to issue reports. We have also performed a study on how two large open-source software projects (Google Chrome and Moodle) address the privacy requirements in our taxonomy through mining their issue reports. Results: The paper discusses how the collected issues were classified, and presents the findings and insights generated from our study. Conclusion: Mining and classifying privacy requirements in issue reports can help organisations be aware of their state of compliance by identifying privacy requirements that have not been addressed in their software projects. The taxonomy also makes it possible to trace back to the regulations, standards and frameworks that the software projects have not complied with, based on the identified privacy requirements. |
1902.08726 | Zheng Yang | Zheng Yang, Hang Lei, Weizhong Qian | A Hybrid Formal Verification System in Coq for Ensuring the Reliability
and Security of Ethereum-based Service Smart Contracts | 29 pages, 28 figures, 5 tables | IEEE ACCESS (2020) | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports on the development of a formal symbolic process virtual
machine (FSPVM) denoted as FSPVM-E for verifying the reliability and security
of Ethereum-based services at the source code level of smart contracts, and the
Coq proof assistant is employed both for programming the system and for proving
its correctness. The current version of FSPVM-E adopts execution-verification
isomorphism, which is an application extension of Curry-Howard isomorphism, as
its fundamental theoretical framework to combine symbolic execution and
higher-order logic theorem proving. The four primary components of FSPVM-E
include a general, extensible, and reusable formal memory framework, an
extensible and universal formal intermediate programming language denoted as
Lolisa, which is a large subset of the Solidity programming language using
generalized algebraic datatypes, the corresponding formally verified
interpreter of Lolisa, denoted as FEther, and assistant tools and libraries.
The self-correctness of all components is certified in Coq. Currently, FSPVM-E
supports the ERC20 token standard, and can automatically and symbolically
execute Ethereum-based smart contracts, scan their standard vulnerabilities,
and verify their reliability and security properties with Hoare-style logic in
Coq. To the best of the authors' knowledge, the present work represents the first
hybrid formal verification system implemented in Coq for Ethereum smart
contracts that is applied at the Solidity source code level.
| [
{
"created": "Sat, 23 Feb 2019 03:32:06 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Mar 2019 04:49:24 GMT",
"version": "v2"
},
{
"created": "Thu, 23 Jan 2020 04:35:03 GMT",
"version": "v3"
}
] | 2020-01-24 | [
[
"Yang",
"Zheng",
""
],
[
"Lei",
"Hang",
""
],
[
"Qian",
"Weizhong",
""
]
] | This paper reports on the development of a formal symbolic process virtual machine (FSPVM) denoted as FSPVM-E for verifying the reliability and security of Ethereum-based services at the source code level of smart contracts, and the Coq proof assistant is employed both for programming the system and for proving its correctness. The current version of FSPVM-E adopts execution-verification isomorphism, which is an application extension of Curry-Howard isomorphism, as its fundamental theoretical framework to combine symbolic execution and higher-order logic theorem proving. The four primary components of FSPVM-E include a general, extensible, and reusable formal memory framework, an extensible and universal formal intermediate programming language denoted as Lolisa, which is a large subset of the Solidity programming language using generalized algebraic datatypes, the corresponding formally verified interpreter of Lolisa, denoted as FEther, and assistant tools and libraries. The self-correctness of all components is certified in Coq. Currently, FSPVM-E supports the ERC20 token standard, and can automatically and symbolically execute Ethereum-based smart contracts, scan their standard vulnerabilities, and verify their reliability and security properties with Hoare-style logic in Coq. To the best of the authors' knowledge, the present work represents the first hybrid formal verification system implemented in Coq for Ethereum smart contracts that is applied at the Solidity source code level. |
1505.02445 | Tomaso Aste | Guido Previde Massara, T. Di Matteo, Tomaso Aste | Network Filtering for Big Data: Triangulated Maximally Filtered Graph | 16 pages, 7 Figures, 2 Tables | null | null | null | cs.DS cond-mat.stat-mech cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a network-filtering method, the Triangulated Maximally Filtered
Graph (TMFG), that provides an approximate solution to the Weighted Maximal
Planar Graph problem. The underlying idea of TMFG consists in building a
triangulation that maximizes a score function associated with the amount of
information retained by the network. TMFG uses as weights any arbitrary
similarity measure to arrange data into a meaningful network structure that can
be used for clustering, community detection and modeling. The method is fast,
adaptable and scalable to very large datasets; it allows online updating and
learning, as new data can be inserted and deleted with combinations of local
and non-local moves. TMFG permits readjustments of the network as a consequence
of changes in the strength of the similarity measure. The method is based on
local topological moves and can therefore take advantage of parallel and GPU
computing. We discuss how this network-filtering method can be used intuitively
and efficiently for big data studies and its significance from an
information-theoretic perspective.
| [
{
"created": "Sun, 10 May 2015 21:47:38 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Aug 2015 16:02:37 GMT",
"version": "v2"
}
] | 2015-08-26 | [
[
"Massara",
"Guido Previde",
""
],
[
"Di Matteo",
"T.",
""
],
[
"Aste",
"Tomaso",
""
]
] | We propose a network-filtering method, the Triangulated Maximally Filtered Graph (TMFG), that provides an approximate solution to the Weighted Maximal Planar Graph problem. The underlying idea of TMFG consists in building a triangulation that maximizes a score function associated with the amount of information retained by the network. TMFG uses as weights any arbitrary similarity measure to arrange data into a meaningful network structure that can be used for clustering, community detection and modeling. The method is fast, adaptable and scalable to very large datasets; it allows online updating and learning, as new data can be inserted and deleted with combinations of local and non-local moves. TMFG permits readjustments of the network as a consequence of changes in the strength of the similarity measure. The method is based on local topological moves and can therefore take advantage of parallel and GPU computing. We discuss how this network-filtering method can be used intuitively and efficiently for big data studies and its significance from an information-theoretic perspective. |
2002.06851 | Diego Antognini | Diego Antognini, Boi Faltings | GameWikiSum: a Novel Large Multi-Document Summarization Dataset | 6 pages, 1 figure, 4 tables. Accepted at LREC 2020 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's research progress in the field of multi-document summarization is
obstructed by the small number of available datasets. Since the acquisition of
reference summaries is costly, existing datasets contain only hundreds of
samples at most, resulting in heavy reliance on hand-crafted features or
necessitating additional, manually annotated data. The lack of large corpora
therefore hinders the development of sophisticated models. Additionally, most
publicly available multi-document summarization corpora are in the news domain,
and no analogous dataset exists in the video game domain. In this paper, we
propose GameWikiSum, a new domain-specific dataset for multi-document
summarization, which is one hundred times larger than commonly used datasets,
and in a different domain from news. Input documents consist of long professional
video game reviews as well as references of their gameplay sections in
Wikipedia pages. We analyze the proposed dataset and show that both abstractive
and extractive models can be trained on it. We release GameWikiSum for further
research: https://github.com/Diego999/GameWikiSum.
| [
{
"created": "Mon, 17 Feb 2020 09:25:19 GMT",
"version": "v1"
}
] | 2020-02-18 | [
[
"Antognini",
"Diego",
""
],
[
"Faltings",
"Boi",
""
]
] | Today's research progress in the field of multi-document summarization is obstructed by the small number of available datasets. Since the acquisition of reference summaries is costly, existing datasets contain only hundreds of samples at most, resulting in heavy reliance on hand-crafted features or necessitating additional, manually annotated data. The lack of large corpora therefore hinders the development of sophisticated models. Additionally, most publicly available multi-document summarization corpora are in the news domain, and no analogous dataset exists in the video game domain. In this paper, we propose GameWikiSum, a new domain-specific dataset for multi-document summarization, which is one hundred times larger than commonly used datasets, and in a different domain from news. Input documents consist of long professional video game reviews as well as references of their gameplay sections in Wikipedia pages. We analyze the proposed dataset and show that both abstractive and extractive models can be trained on it. We release GameWikiSum for further research: https://github.com/Diego999/GameWikiSum. |
2006.07914 | Akbar Siami Namin | Moitrayee Chatterjee and Prerit Datta and Faranak Abri and Akbar Siami
Namin and Keith S. Jones | Cloud as an Attack Platform | null | null | null | null | cs.CR cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an exploratory study of responses from $75$ security professionals
and ethical hackers in order to understand how they abuse cloud platforms for
attack purposes. The participants were recruited at the Black Hat and DEF CON
conferences. We presented the participants with various attack scenarios and
asked them to explain the steps they would have carried out to launch the
attack in each scenario. Participants' responses were studied to understand
attackers' mental models, which would improve our understanding of necessary
security controls and recommendations regarding precautionary actions to
circumvent the exploitation of clouds for malicious activities. We observed
that in 93.78% of the responses, participants abused cloud services to
establish their attack environment and launch attacks.
| [
{
"created": "Sun, 14 Jun 2020 14:32:45 GMT",
"version": "v1"
}
] | 2020-06-16 | [
[
"Chatterjee",
"Moitrayee",
""
],
[
"Datta",
"Prerit",
""
],
[
"Abri",
"Faranak",
""
],
[
"Namin",
"Akbar Siami",
""
],
[
"Jones",
"Keith S.",
""
]
] | We present an exploratory study of responses from $75$ security professionals and ethical hackers in order to understand how they abuse cloud platforms for attack purposes. The participants were recruited at the Black Hat and DEF CON conferences. We presented the participants with various attack scenarios and asked them to explain the steps they would have carried out to launch the attack in each scenario. Participants' responses were studied to understand attackers' mental models, which would improve our understanding of necessary security controls and recommendations regarding precautionary actions to circumvent the exploitation of clouds for malicious activities. We observed that in 93.78% of the responses, participants abused cloud services to establish their attack environment and launch attacks. |
2402.10781 | Wonjae Shin | Rohit Singh, Aryan Kaushik, Wonjae Shin, Marco Di Renzo, Vincenzo
Sciancalepore, Doohwan Lee, Hirofumi Sasaki, Arman Shojaeifard, and Octavia
A. Dobre | Towards 6G Evolution: Three Enhancements, Three Innovations, and Three
Major Challenges | 8 pages, 4 figures, 1 table | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Over the past few decades, wireless communication has witnessed remarkable
growth, experiencing several transformative changes. This article aims to
provide a comprehensive overview of wireless communication technologies, from
the foundations to the recent wireless advances. Specifically, we take a
neutral look at the state-of-the-art technologies for 5G and the ongoing
evolutions towards 6G, reviewing the recommendations of the International
Mobile Communication vision for 2030 (IMT-2030). We first highlight specific
features of IMT-2030, including three IMT-2020 extensions (URLLC+, eMBB+, and
mMTC+) and three new innovations (ubiquitous connectivity, and the integration
of the new capabilities of sensing & AI with communication functionality).
Then, we delve into three major challenges in implementing 6G, along with
global standardization efforts. In addition, a proof of concept is provided by
demonstrating terahertz (THz) signal transmission using Orbital Angular
Momentum (OAM) multiplexing, which is one of the potential candidates for 6G
and beyond. To inspire further potential research, we conclude by identifying
research opportunities and future visions on IMT-2030 recommendations.
| [
{
"created": "Fri, 16 Feb 2024 16:04:32 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Singh",
"Rohit",
""
],
[
"Kaushik",
"Aryan",
""
],
[
"Shin",
"Wonjae",
""
],
[
"Di Renzo",
"Marco",
""
],
[
"Sciancalepore",
"Vincenzo",
""
],
[
"Lee",
"Doohwan",
""
],
[
"Sasaki",
"Hirofumi",
""
],
[
"Shojaeifard",
"Arman",
""
],
[
"Dobre",
"Octavia A.",
""
]
] | Over the past few decades, wireless communication has witnessed remarkable growth, experiencing several transformative changes. This article aims to provide a comprehensive overview of wireless communication technologies, from the foundations to the recent wireless advances. Specifically, we take a neutral look at the state-of-the-art technologies for 5G and the ongoing evolutions towards 6G, reviewing the recommendations of the International Mobile Communication vision for 2030 (IMT-2030). We first highlight specific features of IMT-2030, including three IMT-2020 extensions (URLLC+, eMBB+, and mMTC+) and three new innovations (ubiquitous connectivity, and the integration of the new capabilities of sensing & AI with communication functionality). Then, we delve into three major challenges in implementing 6G, along with global standardization efforts. In addition, a proof of concept is provided by demonstrating terahertz (THz) signal transmission using Orbital Angular Momentum (OAM) multiplexing, which is one of the potential candidates for 6G and beyond. To inspire further potential research, we conclude by identifying research opportunities and future visions on IMT-2030 recommendations. |
2209.01835 | Huiyuan Lai | Huiyuan Lai and Malvina Nissim | Multi-Figurative Language Generation | Accepted to COLING 2022 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Figurative language generation is the task of reformulating a given text in
the desired figure of speech while still being faithful to the original
context. We take the first step towards multi-figurative language modelling by
providing a benchmark for the automatic generation of five common figurative
forms in English. We train mFLAG employing a scheme for multi-figurative
language pre-training on top of BART, and a mechanism for injecting the target
figurative information into the encoder; this enables the generation of text
with the target figurative form from another figurative form without parallel
figurative-figurative sentence pairs. Our approach outperforms all strong
baselines. We also offer some qualitative analysis and reflections on the
relationship between the different figures of speech.
| [
{
"created": "Mon, 5 Sep 2022 08:48:09 GMT",
"version": "v1"
}
] | 2022-09-07 | [
[
"Lai",
"Huiyuan",
""
],
[
"Nissim",
"Malvina",
""
]
] | Figurative language generation is the task of reformulating a given text in the desired figure of speech while still being faithful to the original context. We take the first step towards multi-figurative language modelling by providing a benchmark for the automatic generation of five common figurative forms in English. We train mFLAG employing a scheme for multi-figurative language pre-training on top of BART, and a mechanism for injecting the target figurative information into the encoder; this enables the generation of text with the target figurative form from another figurative form without parallel figurative-figurative sentence pairs. Our approach outperforms all strong baselines. We also offer some qualitative analysis and reflections on the relationship between the different figures of speech. |
2408.04299 | Wei Li | Wan Li, Xinyun Zhong, Wei Li, Song Zhang, Moheng Rong, Yan Xi, Peng
Yuan, Zechen Wang, Xiaolei Jiang, Rongxi Yi, Hui Tang, Yang Chen, Chaohui
Tong, Zhan Wu, Feng Wang | Respiratory Subtraction for Pulmonary Microwave Ablation Evaluation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Currently, lung cancer is a leading cause of global cancer mortality, often
necessitating minimally invasive interventions. Microwave ablation (MWA) is
extensively utilized for both primary and secondary lung tumors. Although
numerous clinical guidelines and standards for MWA have been established, the
clinical evaluation of ablation surgery remains challenging and requires
long-term patient follow-up for confirmation. In this paper, we propose a
method termed respiratory subtraction to evaluate lung tumor ablation therapy
performance based on pre- and post-operative image guidance. Initially,
preoperative images undergo coarse rigid registration to their corresponding
postoperative positions, followed by further non-rigid registration.
Subsequently, subtraction images are generated by subtracting the registered
preoperative images from the postoperative ones. Furthermore, to enhance the
clinical assessment of MWA treatment performance, we devise a quantitative
analysis metric to evaluate ablation efficacy by comparing differences between
tumor areas and treatment areas. To the best of our knowledge, this is the
pioneering work in the field to facilitate the assessment of MWA surgery
performance on pulmonary tumors. Extensive experiments involving 35 clinical
cases further validate the efficacy of the respiratory subtraction method. The
experimental results confirm the effectiveness of the respiratory subtraction
method and the proposed quantitative evaluation metric in assessing lung tumor
treatment.
| [
{
"created": "Thu, 8 Aug 2024 08:25:38 GMT",
"version": "v1"
}
] | 2024-08-09 | [
[
"Li",
"Wan",
""
],
[
"Zhong",
"Xinyun",
""
],
[
"Li",
"Wei",
""
],
[
"Zhang",
"Song",
""
],
[
"Rong",
"Moheng",
""
],
[
"Xi",
"Yan",
""
],
[
"Yuan",
"Peng",
""
],
[
"Wang",
"Zechen",
""
],
[
"Jiang",
"Xiaolei",
""
],
[
"Yi",
"Rongxi",
""
],
[
"Tang",
"Hui",
""
],
[
"Chen",
"Yang",
""
],
[
"Tong",
"Chaohui",
""
],
[
"Wu",
"Zhan",
""
],
[
"Wang",
"Feng",
""
]
] | Currently, lung cancer is a leading cause of global cancer mortality, often necessitating minimally invasive interventions. Microwave ablation (MWA) is extensively utilized for both primary and secondary lung tumors. Although numerous clinical guidelines and standards for MWA have been established, the clinical evaluation of ablation surgery remains challenging and requires long-term patient follow-up for confirmation. In this paper, we propose a method termed respiratory subtraction to evaluate lung tumor ablation therapy performance based on pre- and post-operative image guidance. Initially, preoperative images undergo coarse rigid registration to their corresponding postoperative positions, followed by further non-rigid registration. Subsequently, subtraction images are generated by subtracting the registered preoperative images from the postoperative ones. Furthermore, to enhance the clinical assessment of MWA treatment performance, we devise a quantitative analysis metric to evaluate ablation efficacy by comparing differences between tumor areas and treatment areas. To the best of our knowledge, this is the pioneering work in the field to facilitate the assessment of MWA surgery performance on pulmonary tumors. Extensive experiments involving 35 clinical cases further validate the efficacy of the respiratory subtraction method. The experimental results confirm the effectiveness of the respiratory subtraction method and the proposed quantitative evaluation metric in assessing lung tumor treatment. |
2012.09692 | Marc Franco-Salvador | Sanja \v{S}tajner, Seren Yenikent and Marc Franco-Salvador | Five Psycholinguistic Characteristics for Better Interaction with Users | 26 pages, 4 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When two people pay attention to each other and are interested in what the
other has to say or write, they almost instantly adapt their writing/speaking
style to match the other. For a successful interaction with a user, chatbots
and dialogue systems should be able to do the same. We propose a framework
consisting of five psycholinguistic textual characteristics for better
human-computer interaction. We describe the annotation processes used for
collecting the data, and benchmark five binary classification tasks,
experimenting with different training sizes and model architectures. The best
architectures noticeably outperform several baselines and achieve
macro-averaged F$_1$-scores between 72\% and 96\% depending on the language and
the task. The proposed framework proved to be fairly easy to model for various
languages, even with a small amount of manually annotated data, if the right
architectures are used.
| [
{
"created": "Thu, 17 Dec 2020 16:00:08 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Dec 2020 10:43:04 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Jan 2021 10:01:49 GMT",
"version": "v3"
},
{
"created": "Wed, 13 Jan 2021 10:06:15 GMT",
"version": "v4"
},
{
"created": "Mon, 21 Mar 2022 14:04:53 GMT",
"version": "v5"
}
] | 2022-03-22 | [
[
"Štajner",
"Sanja",
""
],
[
"Yenikent",
"Seren",
""
],
[
"Franco-Salvador",
"Marc",
""
]
] | When two people pay attention to each other and are interested in what the other has to say or write, they almost instantly adapt their writing/speaking style to match the other. For a successful interaction with a user, chatbots and dialogue systems should be able to do the same. We propose a framework consisting of five psycholinguistic textual characteristics for better human-computer interaction. We describe the annotation processes used for collecting the data, and benchmark five binary classification tasks, experimenting with different training sizes and model architectures. The best architectures noticeably outperform several baselines and achieve macro-averaged F$_1$-scores between 72\% and 96\% depending on the language and the task. The proposed framework proved to be fairly easy to model for various languages, even with a small amount of manually annotated data, if the right architectures are used. |
2306.02361 | Ruichun Ma | Ruichun Ma, R. Ivan Zelaya, Wenjun Hu | Softly, Deftly, Scrolls Unfurl Their Splendor: Rolling Flexible Surfaces
for Wideband Wireless | null | null | 10.1145/3570361.3592520 | null | cs.NI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With new frequency bands opening up, emerging wireless IoT devices are
capitalizing on an increasingly divergent range of frequencies. However,
existing coverage provisioning practice is often tied to specific standards and
frequencies. There is little shareable wireless infrastructure for concurrent
links on different frequencies, across networks and standards. This paper
presents Scrolls, a frequency-tunable soft smart surface system to enhance
wideband, multi-network coverage. Scrolls' hardware comprises many rows of
rollable thin plastic film, each attached with flexible copper strips. When
rolled to different lengths, the copper strips act as wire antennas reflecting
signals on the corresponding frequencies. The surface control algorithm
determines the unrolled strip lengths for link enhancement by probing the
search space efficiently. We build a set of distributed, composable Scrolls
prototypes and deploy them in an office. Extensive evaluation shows that
Scrolls can adapt the antenna lengths effectively to provide link enhancement
across diverse standards on sub-6 GHz bands. For concurrent links on 900 MHz
(LoRa), 2.4 GHz (Wi-Fi), 3.7 GHz, and 5 GHz, Scrolls can provide received
signal strength gains to all links simultaneously, by a median of 4 dB and up
to 10 dB.
| [
{
"created": "Sun, 4 Jun 2023 13:58:07 GMT",
"version": "v1"
}
] | 2023-06-06 | [
[
"Ma",
"Ruichun",
""
],
[
"Zelaya",
"R. Ivan",
""
],
[
"Hu",
"Wenjun",
""
]
] | With new frequency bands opening up, emerging wireless IoT devices are capitalizing on an increasingly divergent range of frequencies. However, existing coverage provisioning practice is often tied to specific standards and frequencies. There is little shareable wireless infrastructure for concurrent links on different frequencies, across networks and standards. This paper presents Scrolls, a frequency-tunable soft smart surface system to enhance wideband, multi-network coverage. Scrolls' hardware comprises many rows of rollable thin plastic film, each attached with flexible copper strips. When rolled to different lengths, the copper strips act as wire antennas reflecting signals on the corresponding frequencies. The surface control algorithm determines the unrolled strip lengths for link enhancement by probing the search space efficiently. We build a set of distributed, composable Scrolls prototypes and deploy them in an office. Extensive evaluation shows that Scrolls can adapt the antenna lengths effectively to provide link enhancement across diverse standards on sub-6 GHz bands. For concurrent links on 900 MHz (LoRa), 2.4 GHz (Wi-Fi), 3.7 GHz, and 5 GHz, Scrolls can provide received signal strength gains to all links simultaneously, by a median of 4 dB and up to 10 dB. |
2205.15952 | Ankush Agarwal | Ankush Agarwal, Raj Gite, Shreya Laddha, Pushpak Bhattacharyya,
Satyanarayan Kar, Asif Ekbal, Prabhjit Thind, Rajesh Zele, Ravi Shankar | Knowledge Graph - Deep Learning: A Case Study in Question Answering in
Aviation Safety Domain | LREC 2022 Main Conference Accepted Paper | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the commercial aviation domain, there are a large number of documents, such
as accident reports (NTSB, ASRS) and regulatory directives (ADs). There is a
need for a system to access these diverse repositories efficiently in order to
serve needs in the aviation industry, such as maintenance, compliance, and
safety. In this paper, we propose a Knowledge Graph (KG) guided Deep Learning
(DL) based Question Answering (QA) system for aviation safety. We construct a
Knowledge Graph from aircraft accident reports and contribute this resource to
the community of researchers. The efficacy of this resource is tested and
proved by the aforesaid QA system. Natural language queries constructed from
the documents mentioned above are converted into SPARQL (the interface language
of the RDF graph database) queries and answered. On the DL side, we have two
different QA models: (i) BERT QA, which is a pipeline of Passage Retrieval
(Sentence-BERT based) and Question Answering (BERT based), and (ii) the
recently released GPT-3. We evaluate our system on a set of queries created
from the accident reports. Our combined QA system achieves a 9.3% increase in
accuracy over GPT-3 and a 40.3% increase over BERT QA. Thus, we infer that
KG-DL performs better than either component alone.
| [
{
"created": "Tue, 31 May 2022 16:49:55 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jun 2022 18:50:18 GMT",
"version": "v2"
}
] | 2022-06-13 | [
[
"Agarwal",
"Ankush",
""
],
[
"Gite",
"Raj",
""
],
[
"Laddha",
"Shreya",
""
],
[
"Bhattacharyya",
"Pushpak",
""
],
[
"Kar",
"Satyanarayan",
""
],
[
"Ekbal",
"Asif",
""
],
[
"Thind",
"Prabhjit",
""
],
[
"Zele",
"Rajesh",
""
],
[
"Shankar",
"Ravi",
""
]
] | In the commercial aviation domain, there are a large number of documents, such as accident reports (NTSB, ASRS) and regulatory directives (ADs). There is a need for a system to access these diverse repositories efficiently in order to serve needs in the aviation industry, such as maintenance, compliance, and safety. In this paper, we propose a Knowledge Graph (KG) guided Deep Learning (DL) based Question Answering (QA) system for aviation safety. We construct a Knowledge Graph from aircraft accident reports and contribute this resource to the community of researchers. The efficacy of this resource is tested and proved by the aforesaid QA system. Natural language queries constructed from the documents mentioned above are converted into SPARQL (the interface language of the RDF graph database) queries and answered. On the DL side, we have two different QA models: (i) BERT QA, which is a pipeline of Passage Retrieval (Sentence-BERT based) and Question Answering (BERT based), and (ii) the recently released GPT-3. We evaluate our system on a set of queries created from the accident reports. Our combined QA system achieves a 9.3% increase in accuracy over GPT-3 and a 40.3% increase over BERT QA. Thus, we infer that KG-DL performs better than either component alone. |
1202.6352 | Matthias Baaz | Matthias Baaz (Department of Discrete Mathematics and Geometry, TU
Vienna), Agata Ciabattoni (Department of Computer Languages, TU Vienna),
Christian G Ferm\"uller (Department of Computer Languages, TU Vienna) | Theorem proving for prenex G\"odel logic with Delta: checking validity
and unsatisfiability | 23 pages, accepted for LMCS (Logical Methods in Computer Science) | Logical Methods in Computer Science, Volume 8, Issue 1 (March 6,
2012) lmcs:833 | 10.2168/LMCS-8(1:20)2012 | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | G\"odel logic with the projection operator Delta (G_Delta) is an important
many-valued as well as intermediate logic. In contrast to classical logic, the
validity and the satisfiability problems of G_Delta are not directly dual to
each other. We nevertheless provide a uniform, computational treatment of both
problems for prenex formulas by describing appropriate translations into sets
of order clauses that can be subjected to chaining resolution. For validity a
version of Herbrand's Theorem allows us to show the soundness of standard
Skolemization. For satisfiability the translation involves a novel, extended
Skolemization method.
| [
{
"created": "Tue, 28 Feb 2012 20:38:20 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Mar 2012 15:58:01 GMT",
"version": "v2"
}
] | 2015-07-01 | [
[
"Baaz",
"Matthias",
"",
"Department of Discrete Mathematics and Geometry, TU\n Vienna"
],
[
"Ciabattoni",
"Agata",
"",
"Department of Computer Languages, TU Vienna"
],
[
"Fermüller",
"Christian G",
"",
"Department of Computer Languages, TU Vienna"
]
] | G\"odel logic with the projection operator Delta (G_Delta) is an important many-valued as well as intermediate logic. In contrast to classical logic, the validity and the satisfiability problems of G_Delta are not directly dual to each other. We nevertheless provide a uniform, computational treatment of both problems for prenex formulas by describing appropriate translations into sets of order clauses that can be subjected to chaining resolution. For validity a version of Herbrand's Theorem allows us to show the soundness of standard Skolemization. For satisfiability the translation involves a novel, extended Skolemization method. |
2310.14189 | Yang Song | Yang Song and Prafulla Dhariwal | Improved Techniques for Training Consistency Models | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consistency models are a nascent family of generative models that can sample
high quality data in one step without the need for adversarial training.
Current consistency models achieve optimal sample quality by distilling from
pre-trained diffusion models and employing learned metrics such as LPIPS.
However, distillation limits the quality of consistency models to that of the
pre-trained diffusion model, and LPIPS causes undesirable bias in evaluation.
To tackle these challenges, we present improved techniques for consistency
training, where consistency models learn directly from data without
distillation. We delve into the theory behind consistency training and identify
a previously overlooked flaw, which we address by eliminating Exponential
Moving Average from the teacher consistency model. To replace learned metrics
like LPIPS, we adopt Pseudo-Huber losses from robust statistics. Additionally,
we introduce a lognormal noise schedule for the consistency training objective,
and propose to double total discretization steps every set number of training
iterations. Combined with better hyperparameter tuning, these modifications
enable consistency models to achieve FID scores of 2.51 and 3.25 on CIFAR-10
and ImageNet $64\times 64$ respectively in a single sampling step. These scores
mark a 3.5$\times$ and 4$\times$ improvement compared to prior consistency
training approaches. Through two-step sampling, we further reduce FID scores to
2.24 and 2.77 on these two datasets, surpassing those obtained via distillation
in both one-step and two-step settings, while narrowing the gap between
consistency models and other state-of-the-art generative models.
| [
{
"created": "Sun, 22 Oct 2023 05:33:38 GMT",
"version": "v1"
}
] | 2023-10-24 | [
[
"Song",
"Yang",
""
],
[
"Dhariwal",
"Prafulla",
""
]
] | Consistency models are a nascent family of generative models that can sample high quality data in one step without the need for adversarial training. Current consistency models achieve optimal sample quality by distilling from pre-trained diffusion models and employing learned metrics such as LPIPS. However, distillation limits the quality of consistency models to that of the pre-trained diffusion model, and LPIPS causes undesirable bias in evaluation. To tackle these challenges, we present improved techniques for consistency training, where consistency models learn directly from data without distillation. We delve into the theory behind consistency training and identify a previously overlooked flaw, which we address by eliminating Exponential Moving Average from the teacher consistency model. To replace learned metrics like LPIPS, we adopt Pseudo-Huber losses from robust statistics. Additionally, we introduce a lognormal noise schedule for the consistency training objective, and propose to double total discretization steps every set number of training iterations. Combined with better hyperparameter tuning, these modifications enable consistency models to achieve FID scores of 2.51 and 3.25 on CIFAR-10 and ImageNet $64\times 64$ respectively in a single sampling step. These scores mark a 3.5$\times$ and 4$\times$ improvement compared to prior consistency training approaches. Through two-step sampling, we further reduce FID scores to 2.24 and 2.77 on these two datasets, surpassing those obtained via distillation in both one-step and two-step settings, while narrowing the gap between consistency models and other state-of-the-art generative models. |
2002.09554 | Ziyuan Liu | Ziyuan Liu, Dongheui Lee, Wolfgang Sepp | Particle Filter Based Monocular Human Tracking with a 3D Cardbox Model
and a Novel Deterministic Resampling Strategy | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The challenge of markerless human motion tracking is the high dimensionality
of the search space. Thus, efficient exploration in the search space is of
great significance. In this paper, a motion capturing algorithm is proposed for
upper body motion tracking. The proposed system tracks human motion based on
monocular silhouette-matching, and it is built on the top of a hierarchical
particle filter, within which a novel deterministic resampling strategy (DRS)
is applied. The proposed system is evaluated quantitatively with the ground
truth data measured by an inertial sensor system. In addition, we compare the
DRS with the stratified resampling strategy (SRS). It is shown in experiments
that DRS outperforms SRS with the same amount of particles. Moreover, a new 3D
articulated human upper body model with the name 3D cardbox model is created
and is proven to work successfully for motion tracking. Experiments show that
the proposed system can robustly track upper body motion without
self-occlusion. Motions towards the camera can also be well tracked.
| [
{
"created": "Fri, 21 Feb 2020 21:21:58 GMT",
"version": "v1"
}
] | 2020-02-25 | [
[
"Liu",
"Ziyuan",
""
],
[
"Lee",
"Dongheui",
""
],
[
"Sepp",
"Wolfgang",
""
]
] | The challenge of markerless human motion tracking is the high dimensionality of the search space. Thus, efficient exploration in the search space is of great significance. In this paper, a motion capturing algorithm is proposed for upper body motion tracking. The proposed system tracks human motion based on monocular silhouette-matching, and it is built on top of a hierarchical particle filter, within which a novel deterministic resampling strategy (DRS) is applied. The proposed system is evaluated quantitatively against ground truth data measured by an inertial sensor system. In addition, we compare the DRS with the stratified resampling strategy (SRS). Experiments show that DRS outperforms SRS with the same number of particles. Moreover, a new 3D articulated human upper body model, named the 3D cardbox model, is created and is proven to work successfully for motion tracking. Experiments show that the proposed system can robustly track upper body motion without self-occlusion. Motions towards the camera can also be well tracked. |
2207.06574 | William Mansky | William Mansky | Bringing Iris into the Verified Software Toolchain | 21 pages, 4 figures | null | null | null | cs.PL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The Verified Software Toolchain (VST) is a system for proving correctness of
C programs using separation logic. By connecting to the verified compiler
CompCert, it produces the strongest possible guarantees of correctness for real
C code that we can compile and run. VST included concurrency from its
inception, in the form of reasoning about lock invariants, but concurrent
separation logic (CSL) has advanced by leaps and bounds since then. In this
paper, we describe efforts to integrate advancements from Iris, a
state-of-the-art mechanized CSL, into VST. Some features of Iris (ghost state
and invariants) are re-implemented in VST from the ground up; others (Iris
Proof Mode) are imported from the Iris development; still others (proof rules
for atomic operations) are axiomatized, with the hope that they will be made
foundational in future versions. The result is a system that can prove
correctness of sophisticated concurrent programs implemented in C, with
fine-grained locking and non-blocking atomic operations, that yields varying
soundness guarantees depending on the features used.
| [
{
"created": "Thu, 14 Jul 2022 00:34:52 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Jul 2022 15:48:22 GMT",
"version": "v2"
}
] | 2022-07-18 | [
[
"Mansky",
"William",
""
]
] | The Verified Software Toolchain (VST) is a system for proving correctness of C programs using separation logic. By connecting to the verified compiler CompCert, it produces the strongest possible guarantees of correctness for real C code that we can compile and run. VST included concurrency from its inception, in the form of reasoning about lock invariants, but concurrent separation logic (CSL) has advanced by leaps and bounds since then. In this paper, we describe efforts to integrate advancements from Iris, a state-of-the-art mechanized CSL, into VST. Some features of Iris (ghost state and invariants) are re-implemented in VST from the ground up; others (Iris Proof Mode) are imported from the Iris development; still others (proof rules for atomic operations) are axiomatized, with the hope that they will be made foundational in future versions. The result is a system that can prove correctness of sophisticated concurrent programs implemented in C, with fine-grained locking and non-blocking atomic operations, that yields varying soundness guarantees depending on the features used. |
1205.1331 | Thomas Kesselheim | Thomas Kesselheim | Approximation Algorithms for Wireless Link Scheduling with Flexible Data
Rates | null | null | null | null | cs.NI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider scheduling problems in wireless networks with respect to flexible
data rates. That is, more or less data can be transmitted per time depending on
the signal quality, which is determined by the
signal-to-interference-plus-noise ratio (SINR). Each wireless link has a
utility function mapping SINR values to the respective data rates. We have to
decide which transmissions are performed simultaneously and (depending on the
problem variant) also which transmission powers are used.
In the capacity-maximization problem, one strives to maximize the overall
network throughput, i.e., the summed utility of all links. For arbitrary
utility functions (not necessarily continuous ones), we present an O(log
n)-approximation when having n communication requests. This algorithm is built
on a constant-factor approximation for the special case of the respective
problem where utility functions only consist of a single step. In other words,
each link has an individual threshold and we aim at maximizing the number of
links whose threshold is satisfied. Along the way, this improves the result in
[Kesselheim, SODA 2011] by not only extending it to individual thresholds but
also showing a constant approximation factor independent of assumptions on the
underlying metric space or the network parameters.
In addition, we consider the latency-minimization problem. Here, each link
has a demand, e.g., representing an amount of data. We have to compute a
schedule of shortest possible length such that for each link the demand is
fulfilled, that is the overall summed utility (or data transferred) is at least
as large as its demand. Based on the capacity-maximization algorithm, we show
an O(log^2 n)-approximation for this problem.
| [
{
"created": "Mon, 7 May 2012 10:18:24 GMT",
"version": "v1"
}
] | 2012-05-08 | [
[
"Kesselheim",
"Thomas",
""
]
] | We consider scheduling problems in wireless networks with respect to flexible data rates. That is, more or less data can be transmitted per time depending on the signal quality, which is determined by the signal-to-interference-plus-noise ratio (SINR). Each wireless link has a utility function mapping SINR values to the respective data rates. We have to decide which transmissions are performed simultaneously and (depending on the problem variant) also which transmission powers are used. In the capacity-maximization problem, one strives to maximize the overall network throughput, i.e., the summed utility of all links. For arbitrary utility functions (not necessarily continuous ones), we present an O(log n)-approximation when having n communication requests. This algorithm is built on a constant-factor approximation for the special case of the respective problem where utility functions only consist of a single step. In other words, each link has an individual threshold and we aim at maximizing the number of links whose threshold is satisfied. Along the way, this improves the result in [Kesselheim, SODA 2011] by not only extending it to individual thresholds but also showing a constant approximation factor independent of assumptions on the underlying metric space or the network parameters. In addition, we consider the latency-minimization problem. Here, each link has a demand, e.g., representing an amount of data. We have to compute a schedule of shortest possible length such that for each link the demand is fulfilled, that is, the overall summed utility (or data transferred) is at least as large as its demand. Based on the capacity-maximization algorithm, we show an O(log^2 n)-approximation for this problem. |
2407.21264 | Alimohammad Beigi | Alimohammad Beigi, Zhen Tan, Nivedh Mudiam, Canyu Chen, Kai Shu and
Huan Liu | Model Attribution in LLM-Generated Disinformation: A Domain
Generalization Approach with Supervised Contrastive Learning | 10 pages, 2 figures, accepted at DSAA 2024 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model attribution for LLM-generated disinformation poses a significant
challenge in understanding its origins and mitigating its spread. This task is
especially challenging because modern large language models (LLMs) produce
disinformation with human-like quality. Additionally, the diversity in
prompting methods used to generate disinformation complicates accurate source
attribution. These methods introduce domain-specific features that can mask the
fundamental characteristics of the models. In this paper, we introduce the
concept of model attribution as a domain generalization problem, where each
prompting method represents a unique domain. We argue that an effective
attribution model must be invariant to these domain-specific features. It
should also be proficient in identifying the originating models across all
scenarios, reflecting real-world detection challenges. To address this, we
introduce a novel approach based on Supervised Contrastive Learning. This
method is designed to enhance the model's robustness to variations in prompts
and focuses on distinguishing between different source LLMs. We evaluate our
model through rigorous experiments involving three common prompting methods:
``open-ended'', ``rewriting'', and ``paraphrasing'', and three advanced LLMs:
``llama 2'', ``chatgpt'', and ``vicuna''. Our results demonstrate the
effectiveness of our approach in model attribution tasks, achieving
state-of-the-art performance across diverse and unseen datasets.
| [
{
"created": "Wed, 31 Jul 2024 00:56:09 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Aug 2024 08:10:43 GMT",
"version": "v2"
}
] | 2024-08-15 | [
[
"Beigi",
"Alimohammad",
""
],
[
"Tan",
"Zhen",
""
],
[
"Mudiam",
"Nivedh",
""
],
[
"Chen",
"Canyu",
""
],
[
"Shu",
"Kai",
""
],
[
"Liu",
"Huan",
""
]
] | Model attribution for LLM-generated disinformation poses a significant challenge in understanding its origins and mitigating its spread. This task is especially challenging because modern large language models (LLMs) produce disinformation with human-like quality. Additionally, the diversity in prompting methods used to generate disinformation complicates accurate source attribution. These methods introduce domain-specific features that can mask the fundamental characteristics of the models. In this paper, we introduce the concept of model attribution as a domain generalization problem, where each prompting method represents a unique domain. We argue that an effective attribution model must be invariant to these domain-specific features. It should also be proficient in identifying the originating models across all scenarios, reflecting real-world detection challenges. To address this, we introduce a novel approach based on Supervised Contrastive Learning. This method is designed to enhance the model's robustness to variations in prompts and focuses on distinguishing between different source LLMs. We evaluate our model through rigorous experiments involving three common prompting methods: ``open-ended'', ``rewriting'', and ``paraphrasing'', and three advanced LLMs: ``llama 2'', ``chatgpt'', and ``vicuna''. Our results demonstrate the effectiveness of our approach in model attribution tasks, achieving state-of-the-art performance across diverse and unseen datasets. |
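As a concrete reference point for the learning objective, here is a generic supervised contrastive (SupCon-style) loss with the source LLM as the label; the toy embeddings, dimension, and temperature are assumptions, and this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pull together examples generated by the same source LLM and push apart
    the rest; a Khosla et al.-style SupCon loss written from scratch."""
    z = F.normalize(embeddings, dim=1)                # unit-norm embeddings
    sim = z @ z.t() / temperature                     # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float("-inf"))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Average log-probability over positives, for anchors with >= 1 positive.
    pos_counts = pos_mask.sum(1)
    keep = pos_counts > 0
    loss = -(log_prob * pos_mask).sum(1)[keep] / pos_counts[keep]
    return loss.mean()

# Toy usage: 8 texts embedded in 16-d, labels = which LLM generated each text.
emb = torch.randn(8, 16)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(supervised_contrastive_loss(emb, labels))
```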
2012.13227 | Rahul Bhadani | Rahul Bhadani | Path Planning of Unmanned System using Carrot-chasing Algorithm | null | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | When an unmanned system is launched for a mission-critical task, it is
required to follow a predetermined path. This means the unmanned system requires
a path-following algorithm to complete the mission. Since the predetermined
path is typically given as a set of data points, the curvature and derivative
of the path are not available, and storing the path requires a large amount of
on-board memory. In this work, we study a simple path-following algorithm, the
Carrot-chasing algorithm, which uses a proportional controller to control the
movement of an unmanned system.
| [
{
"created": "Thu, 24 Dec 2020 12:47:37 GMT",
"version": "v1"
}
] | 2020-12-25 | [
[
"Bhadani",
"Rahul",
""
]
] | When an unmanned system is launched for a mission-critical task, it is required to follow a predetermined path. This means the unmanned system requires a path-following algorithm to complete the mission. Since the predetermined path is typically given as a set of data points, the curvature and derivative of the path are not available, and storing the path requires a large amount of on-board memory. In this work, we study a simple path-following algorithm, the Carrot-chasing algorithm, which uses a proportional controller to control the movement of an unmanned system.
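The carrot-chasing idea itself fits in a few lines. The sketch below assumes a unicycle vehicle and made-up gains (lookahead delta, proportional gain kp, speed v); it illustrates the proportional heading controller the abstract describes rather than any particular implementation from the paper.

```python
import numpy as np

def carrot_chasing_step(p, psi, w1, w2, delta=1.0, kp=2.0, v=1.0, dt=0.05):
    """One update of a unicycle vehicle following the segment w1 -> w2.
    The carrot sits a lookahead `delta` beyond the projection of the vehicle
    onto the path; a proportional controller steers the heading toward it."""
    d = w2 - w1
    u = np.dot(p - w1, d) / np.dot(d, d)              # normalized projection
    carrot = w1 + min(u + delta / np.linalg.norm(d), 1.0) * d
    psi_d = np.arctan2(carrot[1] - p[1], carrot[0] - p[0])
    err = np.arctan2(np.sin(psi_d - psi), np.cos(psi_d - psi))  # wrap angle
    psi = psi + kp * err * dt                         # proportional control
    p = p + v * dt * np.array([np.cos(psi), np.sin(psi)])
    return p, psi

p, psi = np.array([0.0, 2.0]), 0.0                    # start 2 m off the path
w1, w2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
for _ in range(150):
    p, psi = carrot_chasing_step(p, psi, w1, w2)
print("cross-track error after 150 steps:", abs(p[1]))
```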
2310.01415 | Jiageng Mao | Jiageng Mao, Yuxi Qian, Junjie Ye, Hang Zhao, Yue Wang | GPT-Driver: Learning to Drive with GPT | NeurIPS 2023 Foundation Models for Decision Making Workshop | null | null | null | cs.CV cs.AI cs.CL cs.RO | http://creativecommons.org/licenses/by/4.0/ | We present a simple yet effective approach that can transform the OpenAI
GPT-3.5 model into a reliable motion planner for autonomous vehicles. Motion
planning is a core challenge in autonomous driving, aiming to plan a driving
trajectory that is safe and comfortable. Existing motion planners predominantly
leverage heuristic methods to forecast driving trajectories, yet these
approaches demonstrate insufficient generalization capabilities in the face of
novel and unseen driving scenarios. In this paper, we propose a novel approach
to motion planning that capitalizes on the strong reasoning capabilities and
generalization potential inherent to Large Language Models (LLMs). The
fundamental insight of our approach is the reformulation of motion planning as
a language modeling problem, a perspective not previously explored.
Specifically, we represent the planner inputs and outputs as language tokens,
and leverage the LLM to generate driving trajectories through a language
description of coordinate positions. Furthermore, we propose a novel
prompting-reasoning-finetuning strategy to stimulate the numerical reasoning
potential of the LLM. With this strategy, the LLM can describe highly precise
trajectory coordinates and also its internal decision-making process in natural
language. We evaluate our approach on the large-scale nuScenes dataset, and
extensive experiments substantiate the effectiveness, generalization ability,
and interpretability of our GPT-based motion planner. Code is now available at
https://github.com/PointsCoder/GPT-Driver.
| [
{
"created": "Mon, 2 Oct 2023 17:59:57 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Oct 2023 18:33:10 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Dec 2023 05:26:29 GMT",
"version": "v3"
}
] | 2023-12-06 | [
[
"Mao",
"Jiageng",
""
],
[
"Qian",
"Yuxi",
""
],
[
"Ye",
"Junjie",
""
],
[
"Zhao",
"Hang",
""
],
[
"Wang",
"Yue",
""
]
] | We present a simple yet effective approach that can transform the OpenAI GPT-3.5 model into a reliable motion planner for autonomous vehicles. Motion planning is a core challenge in autonomous driving, aiming to plan a driving trajectory that is safe and comfortable. Existing motion planners predominantly leverage heuristic methods to forecast driving trajectories, yet these approaches demonstrate insufficient generalization capabilities in the face of novel and unseen driving scenarios. In this paper, we propose a novel approach to motion planning that capitalizes on the strong reasoning capabilities and generalization potential inherent to Large Language Models (LLMs). The fundamental insight of our approach is the reformulation of motion planning as a language modeling problem, a perspective not previously explored. Specifically, we represent the planner inputs and outputs as language tokens, and leverage the LLM to generate driving trajectories through a language description of coordinate positions. Furthermore, we propose a novel prompting-reasoning-finetuning strategy to stimulate the numerical reasoning potential of the LLM. With this strategy, the LLM can describe highly precise trajectory coordinates and also its internal decision-making process in natural language. We evaluate our approach on the large-scale nuScenes dataset, and extensive experiments substantiate the effectiveness, generalization ability, and interpretability of our GPT-based motion planner. Code is now available at https://github.com/PointsCoder/GPT-Driver. |
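The reformulation of planning as language modeling amounts to serializing observations into text and parsing waypoints back out. A hypothetical sketch: the prompt template, units, and waypoint count below are illustrative assumptions, not GPT-Driver's actual prompt.

```python
import re

def observations_to_prompt(ego_states, detections):
    """Serialize planner inputs as language tokens."""
    lines = ["You are the motion planner of an autonomous vehicle."]
    lines.append("Ego history (x, y) in meters:")
    lines.extend(f"  t-{i}: ({x:.2f}, {y:.2f})"
                 for i, (x, y) in enumerate(reversed(ego_states)))
    lines.append("Detected objects (class, x, y):")
    lines.extend(f"  {c} at ({x:.2f}, {y:.2f})" for c, x, y in detections)
    lines.append("Plan a safe 3 s trajectory as 6 waypoints (x, y).")
    return "\n".join(lines)

def parse_trajectory(text):
    """Read '(x, y)' pairs back out of the model's natural-language answer."""
    pairs = re.findall(r"\(\s*(-?\d+\.?\d*)\s*,\s*(-?\d+\.?\d*)\s*\)", text)
    return [(float(a), float(b)) for a, b in pairs]

print(observations_to_prompt([(0.0, 0.0), (1.2, 0.1)], [("car", 8.5, -1.0)]))
print(parse_trajectory("(2.4, 0.2) (3.6, 0.3) (4.8, 0.4)"))
```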
1708.03324 | Mohammad Dehghani Soltani | Mohammad Dehghani Soltani, Xiping Wu, Majid Safari, and Harald Haas | Bidirectional User Throughput Maximization Based on Feedback Reduction
in LiFi Networks | 30 pages, 9 figures, submitted to IEEE Transactions on Communications | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Channel adaptive signalling, which is based on feedback, can result in almost
any performance metric enhancement. Unlike the radio frequency (RF) channel,
the optical wireless communications (OWCs) channel is fairly static. This
feature enables a potential improvement of the bidirectional user throughput by
reducing the amount of feedback. Light-Fidelity (LiFi) is a subset of OWCs, and
it is a bidirectional, high-speed and fully networked wireless communication
technology where visible light and infrared are used in downlink and uplink
respectively. In this paper, two techniques for reducing the amount of feedback
in LiFi cellular networks are proposed: i) a limited-content feedback (LCF)
scheme based on reducing the content of the feedback information, and ii) a
limited-frequency feedback (LFF) scheme based on an update interval that lets
the receiver transmit feedback information only after a number of data frames
have been transmitted. Furthermore, based on the random waypoint (RWP) mobility
model, the optimum update interval, which provides maximum bidirectional user
equipment (UE) throughput, has been derived. Results show that the proposed schemes can
achieve better average overall throughput compared to the benchmark one-bit
feedback and full-feedback mechanisms.
| [
{
"created": "Thu, 10 Aug 2017 11:06:10 GMT",
"version": "v1"
}
] | 2017-08-14 | [
[
"Soltani",
"Mohammad Dehghani",
""
],
[
"Wu",
"Xiping",
""
],
[
"Safari",
"Majid",
""
],
[
"Haas",
"Harald",
""
]
] | Channel adaptive signalling, which is based on feedback, can result in almost any performance metric enhancement. Unlike the radio frequency (RF) channel, the optical wireless communications (OWCs) channel is fairly static. This feature enables a potential improvement of the bidirectional user throughput by reducing the amount of feedback. Light-Fidelity (LiFi) is a subset of OWCs, and it is a bidirectional, high-speed and fully networked wireless communication technology where visible light and infrared are used in downlink and uplink respectively. In this paper, two techniques for reducing the amount of feedback in LiFi cellular networks are proposed: i) a limited-content feedback (LCF) scheme based on reducing the content of the feedback information, and ii) a limited-frequency feedback (LFF) scheme based on an update interval that lets the receiver transmit feedback information only after a number of data frames have been transmitted. Furthermore, based on the random waypoint (RWP) mobility model, the optimum update interval, which provides maximum bidirectional user equipment (UE) throughput, has been derived. Results show that the proposed schemes can achieve better average overall throughput compared to the benchmark one-bit feedback and full-feedback mechanisms.
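Why an optimum update interval exists can be seen with a toy model: less frequent feedback saves uplink overhead, but lets the channel state information age. All constants below are invented for illustration and have nothing to do with the paper's RWP-based derivation.

```python
import numpy as np

feedback_cost = 0.2        # fraction of a frame spent sending feedback
staleness_penalty = 0.03   # per-frame rate loss while the CSI ages

def avg_throughput(update_interval_k):
    """Mean normalized rate over one feedback cycle, minus amortized overhead."""
    age = np.arange(update_interval_k)             # CSI age within a cycle
    rate = np.maximum(1.0 - staleness_penalty * age, 0.0)
    overhead = feedback_cost / update_interval_k   # amortized uplink cost
    return rate.mean() - overhead

ks = np.arange(1, 40)
best = ks[np.argmax([avg_throughput(k) for k in ks])]
print("best update interval (toy model):", best)
```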
1911.07418 | Dian Ang Yap | Dian Ang Yap, Nicholas Roberts, Vinay Uday Prabhu | Grassmannian Packings in Neural Networks: Learning with Maximal Subspace
Packings for Diversity and Anti-Sparsity | Presented at Bayesian Deep Learning and Workshop on Information
Theory and Machine Learning, 33rd Conference on Neural Information
ProcessingSystems (NeurIPS 2019), Vancouver, Canada | null | null | null | cs.LG cs.IT math.IT stat.ML | http://creativecommons.org/publicdomain/zero/1.0/ | Kernel sparsity ("dying ReLUs") and lack of diversity are commonly observed
in CNN kernels, which decrease model capacity. Drawing inspiration from
information theory and wireless communications, we demonstrate the intersection
of coding theory and deep learning through the Grassmannian subspace packing
problem in CNNs. We propose Grassmannian packings for initial kernel layers to
be initialized maximally far apart based on chordal or Fubini-Study distance.
Convolutional kernels initialized with Grassmannian packings exhibit diverse
features and obtain diverse representations. We show that Grassmannian
packings, especially in the initial layers, address kernel sparsity and
encourage diversity, while improving classification accuracy across shallow and
deep CNNs with better convergence rates.
| [
{
"created": "Mon, 18 Nov 2019 04:17:06 GMT",
"version": "v1"
}
] | 2019-11-19 | [
[
"Yap",
"Dian Ang",
""
],
[
"Roberts",
"Nicholas",
""
],
[
"Prabhu",
"Vinay Uday",
""
]
] | Kernel sparsity ("dying ReLUs") and lack of diversity are commonly observed in CNN kernels, which decrease model capacity. Drawing inspiration from information theory and wireless communications, we demonstrate the intersection of coding theory and deep learning through the Grassmannian subspace packing problem in CNNs. We propose Grassmannian packings for initial kernel layers to be initialized maximally far apart based on chordal or Fubini-Study distance. Convolutional kernels initialized with Grassmannian packings exhibit diverse features and obtain diverse representations. We show that Grassmannian packings, especially in the initial layers, address kernel sparsity and encourage diversity, while improving classification accuracy across shallow and deep CNNs with better convergence rates.
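The chordal distance used here has a compact definition via principal angles. The sketch below computes it and builds a crude packing by rejection sampling; a real Grassmannian packing would optimize the minimum pairwise distance directly, and the 3x3-filter dimension and the 0.9 threshold are assumptions.

```python
import numpy as np

def chordal_distance(A, B):
    """Chordal distance between the subspaces spanned by the columns of the
    orthonormal matrices A and B: sqrt(sum_i sin^2(theta_i)) over the
    principal angles theta_i."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)   # cosines of principal angles
    s = np.clip(s, -1.0, 1.0)
    return np.sqrt(np.sum(1.0 - s**2))

rng = np.random.default_rng(0)

def random_subspace(dim, k):
    q, _ = np.linalg.qr(rng.normal(size=(dim, k)))
    return q

# Rejection-based packing of 1-d subspaces in R^9 (flattened 3x3 filters):
# keep a candidate only if it is far from everything kept so far.
kept, dim, k = [], 9, 1
while len(kept) < 8:
    cand = random_subspace(dim, k)
    if all(chordal_distance(cand, q) > 0.9 for q in kept):
        kept.append(cand)
print("min pairwise chordal distance:",
      min(chordal_distance(a, b)
          for i, a in enumerate(kept) for b in kept[i + 1:]))
```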
2103.03700 | Alexander Sutherland | A. Sutherland, S. Magg, C. Weber, S. Wermter | Analyzing the Influence of Dataset Composition for Emotion Recognition | 2 pages, 2 figures, presented at IROS 2018 Workshop on Language and
Robotics | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recognizing emotions from text in multimodal architectures has yielded
promising results, surpassing video and audio modalities under certain
circumstances. However, the method by which multimodal data is collected can be
significant for recognizing emotional features in language. In this paper, we
address the influence data collection methodology has on two multimodal emotion
recognition datasets, the IEMOCAP dataset and the OMG-Emotion Behavior dataset,
by analyzing textual dataset compositions and emotion recognition accuracy.
Experiments with the full IEMOCAP dataset indicate that the composition
negatively influences generalization performance when compared to the
OMG-Emotion Behavior dataset. We conclude by discussing the impact this may
have on HRI experiments.
| [
{
"created": "Fri, 5 Mar 2021 14:20:59 GMT",
"version": "v1"
}
] | 2021-03-08 | [
[
"Sutherland",
"A.",
""
],
[
"Magg",
"S.",
""
],
[
"Weber",
"C.",
""
],
[
"Wermter",
"S.",
""
]
] | Recognizing emotions from text in multimodal architectures has yielded promising results, surpassing video and audio modalities under certain circumstances. However, the method by which multimodal data is collected can be significant for recognizing emotional features in language. In this paper, we address the influence data collection methodology has on two multimodal emotion recognition datasets, the IEMOCAP dataset and the OMG-Emotion Behavior dataset, by analyzing textual dataset compositions and emotion recognition accuracy. Experiments with the full IEMOCAP dataset indicate that the composition negatively influences generalization performance when compared to the OMG-Emotion Behavior dataset. We conclude by discussing the impact this may have on HRI experiments. |
2309.11177 | Qian Zhao | Qian Zhao and Zhengwei Wu and Zhiqiang Zhang and Jun Zhou | Long-tail Augmented Graph Contrastive Learning for Recommendation | 17 pages, 6 figures, accepted by ECML/PKDD 2023 (European Conference
on Machine Learning and Principles and Practice of Knowledge Discovery in
Databases) | null | null | null | cs.IR cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Convolutional Networks (GCNs) have demonstrated promising results for
recommender systems, as they can effectively leverage high-order relationships.
However, these methods usually encounter the data sparsity issue in real-world
scenarios. To address this issue, GCN-based recommendation methods employ
contrastive learning to introduce self-supervised signals. Despite their
effectiveness, these methods lack consideration of the significant degree
disparity between head and tail nodes. This can lead to non-uniform
representation distribution, which is a crucial factor for the performance of
contrastive learning methods. To tackle the above issue, we propose a novel
Long-tail Augmented Graph Contrastive Learning (LAGCL) method for
recommendation. Specifically, we introduce a learnable long-tail augmentation
approach to enhance tail nodes by supplementing predicted neighbor information,
and generate contrastive views based on the resulting augmented graph. To make
the data augmentation schema learnable, we design an auto drop module to
generate pseudo-tail nodes from head nodes and a knowledge transfer module to
reconstruct the head nodes from pseudo-tail nodes. Additionally, we employ
generative adversarial networks to ensure that the distribution of the
generated tail/head nodes matches that of the original tail/head nodes.
Extensive experiments conducted on three benchmark datasets demonstrate the
significant improvement in performance of our model over the state-of-the-arts.
Further analyses demonstrate the uniformity of learned representations and the
superiority of LAGCL on long-tail performance. Code is publicly available at
https://github.com/im0qianqian/LAGCL
| [
{
"created": "Wed, 20 Sep 2023 09:57:20 GMT",
"version": "v1"
}
] | 2023-09-21 | [
[
"Zhao",
"Qian",
""
],
[
"Wu",
"Zhengwei",
""
],
[
"Zhang",
"Zhiqiang",
""
],
[
"Zhou",
"Jun",
""
]
] | Graph Convolutional Networks (GCNs) have demonstrated promising results for recommender systems, as they can effectively leverage high-order relationships. However, these methods usually encounter the data sparsity issue in real-world scenarios. To address this issue, GCN-based recommendation methods employ contrastive learning to introduce self-supervised signals. Despite their effectiveness, these methods lack consideration of the significant degree disparity between head and tail nodes. This can lead to non-uniform representation distribution, which is a crucial factor for the performance of contrastive learning methods. To tackle the above issue, we propose a novel Long-tail Augmented Graph Contrastive Learning (LAGCL) method for recommendation. Specifically, we introduce a learnable long-tail augmentation approach to enhance tail nodes by supplementing predicted neighbor information, and generate contrastive views based on the resulting augmented graph. To make the data augmentation schema learnable, we design an auto drop module to generate pseudo-tail nodes from head nodes and a knowledge transfer module to reconstruct the head nodes from pseudo-tail nodes. Additionally, we employ generative adversarial networks to ensure that the distribution of the generated tail/head nodes matches that of the original tail/head nodes. Extensive experiments conducted on three benchmark datasets demonstrate the significant improvement in performance of our model over the state-of-the-arts. Further analyses demonstrate the uniformity of learned representations and the superiority of LAGCL on long-tail performance. Code is publicly available at https://github.com/im0qianqian/LAGCL
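Two of the ingredients, the degree-based head/tail split and the idea of dropping edges to create pseudo-tail views of head nodes, can be sketched directly; the threshold and the random drop below are illustrative stand-ins for LAGCL's learnable auto drop module.

```python
import numpy as np

# Build a toy user graph as an adjacency dict.
rng = np.random.default_rng(0)
num_nodes, adj = 100, {}
for u in range(num_nodes):
    deg = rng.integers(1, 30)
    adj[u] = list(rng.choice(num_nodes, size=deg, replace=False))

# Split nodes into head (high-degree) and tail (low-degree) by a threshold.
degree_threshold = 10
head = [u for u in adj if len(adj[u]) >= degree_threshold]
tail = [u for u in adj if len(adj[u]) < degree_threshold]

def pseudo_tail(u, keep=5):
    """Down-sample a head node's neighborhood so it looks like a tail node;
    a knowledge-transfer module would then learn to undo this drop."""
    return list(rng.choice(adj[u], size=keep, replace=False))

print(len(head), "head nodes,", len(tail), "tail nodes")
print("pseudo-tail view of node", head[0], "->", pseudo_tail(head[0]))
```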
2210.16508 | Yuhe Guo | Yuhe Guo and Zhewei Wei | Clenshaw Graph Neural Networks | 10 pages, 2 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph Convolutional Networks (GCNs), which use a message-passing paradigm
with stacked convolution layers, are foundational methods for learning graph
representations. Recent GCN models use various residual connection techniques
to alleviate the model degradation problem such as over-smoothing and gradient
vanishing. Existing residual connection techniques, however, fail to make
extensive use of underlying graph structure as in the graph spectral domain,
which is critical for obtaining satisfactory results on heterophilic graphs. In
this paper, we introduce ClenshawGCN, a GNN model that employs the Clenshaw
Summation Algorithm to enhance the expressiveness of the GCN model. ClenshawGCN
equips the standard GCN model with two straightforward residual modules: the
adaptive initial residual connection and the negative second-order residual
connection. We show that by adding these two residual modules, ClenshawGCN
implicitly simulates a polynomial filter under the Chebyshev basis, giving it
at least as much expressive power as polynomial spectral GNNs. In addition, we
conduct comprehensive experiments to demonstrate the superiority of our model
over spatial and spectral GNN models.
| [
{
"created": "Sat, 29 Oct 2022 06:32:39 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Nov 2022 14:30:56 GMT",
"version": "v2"
}
] | 2022-11-04 | [
[
"Guo",
"Yuhe",
""
],
[
"Wei",
"Zhewei",
""
]
] | Graph Convolutional Networks (GCNs), which use a message-passing paradigm with stacked convolution layers, are foundational methods for learning graph representations. Recent GCN models use various residual connection techniques to alleviate the model degradation problem such as over-smoothing and gradient vanishing. Existing residual connection techniques, however, fail to make extensive use of underlying graph structure as in the graph spectral domain, which is critical for obtaining satisfactory results on heterophilic graphs. In this paper, we introduce ClenshawGCN, a GNN model that employs the Clenshaw Summation Algorithm to enhance the expressiveness of the GCN model. ClenshawGCN equips the standard GCN model with two straightforward residual modules: the adaptive initial residual connection and the negative second-order residual connection. We show that by adding these two residual modules, ClenshawGCN implicitly simulates a polynomial filter under the Chebyshev basis, giving it at least as much expressive power as polynomial spectral GNNs. In addition, we conduct comprehensive experiments to demonstrate the superiority of our model over spatial and spectral GNN models. |
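The Clenshaw summation algorithm the model is named after evaluates a Chebyshev expansion without forming the polynomials explicitly. A minimal sketch, checked against direct evaluation (this is the generic recurrence, not the ClenshawGCN layer itself):

```python
import numpy as np

def chebyshev_filter_clenshaw(L_hat, X, coeffs):
    """Evaluate sum_k coeffs[k] * T_k(L_hat) @ X with the Clenshaw recurrence,
    where T_k are Chebyshev polynomials and L_hat is a (rescaled) propagation
    matrix: b_k = 2 L_hat b_{k+1} - b_{k+2} + a_k X."""
    b_next, b_cur = np.zeros_like(X), np.zeros_like(X)
    for a_k in reversed(coeffs[1:]):
        b_next, b_cur = b_cur, 2.0 * (L_hat @ b_cur) - b_next + a_k * X
    return (L_hat @ b_cur) - b_next + coeffs[0] * X

# Sanity check against the direct three-term Chebyshev recurrence.
rng = np.random.default_rng(0)
L_hat = rng.normal(size=(5, 5)); L_hat = (L_hat + L_hat.T) / 10
X = rng.normal(size=(5, 3))
coeffs = [0.5, -0.2, 0.1, 0.3]

T = [np.eye(5), L_hat.copy()]
for _ in range(2, len(coeffs)):
    T.append(2.0 * L_hat @ T[-1] - T[-2])
direct = sum(a * (Tk @ X) for a, Tk in zip(coeffs, T))
assert np.allclose(chebyshev_filter_clenshaw(L_hat, X, coeffs), direct)
print("Clenshaw matches direct Chebyshev evaluation")
```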
1506.00677 | Katarzyna Paluch | Pratik Ghosal and Adam Kunysz and Katarzyna Paluch | Characterisation of Strongly Stable Matchings | null | null | null | null | cs.DS cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An instance of a strongly stable matching problem (SSMP) is an undirected
bipartite graph $G=(A \cup B, E)$, with an adjacency list of each vertex being
a linearly ordered list of ties, which are subsets of vertices equally good for
a given vertex. Ties are disjoint and may contain one vertex. A matching $M$ is
a set of vertex-disjoint edges. An edge $(x,y) \in E \setminus M$ is a {\em
blocking edge} for $M$ if $x$ is either unmatched or strictly prefers $y$ to
its current partner in $M$, and $y$ is either unmatched or strictly prefers $x$
to its current partner in $M$ or is indifferent between them. A matching is
{\em strongly stable} if there is no blocking edge with respect to it. We
present an algorithm for the generation of all strongly stable matchings, thus
solving an open problem already stated in the book by Gusfield and Irving
\cite{GI}. It has previously been shown that strongly stable matchings form a
distributive lattice and although the number of strongly stable matchings can
be exponential in the number of vertices, we show that there exists a partial
order with $O(m)$ elements representing all strongly stable matchings, where
$m$ denotes the number of edges in the graph. We give two algorithms that
construct two such representations: one in $O(nm^2)$ time and the other in
$O(nm)$ time, where $n$ denotes the number of vertices in the graph. Note that
the construction of the second representation has the same time complexity as
that of computing a single strongly stable matching.
| [
{
"created": "Mon, 1 Jun 2015 21:25:24 GMT",
"version": "v1"
}
] | 2015-06-03 | [
[
"Ghosal",
"Pratik",
""
],
[
"Kunysz",
"Adam",
""
],
[
"Paluch",
"Katarzyna",
""
]
] | An instance of a strongly stable matching problem (SSMP) is an undirected bipartite graph $G=(A \cup B, E)$, with an adjacency list of each vertex being a linearly ordered list of ties, which are subsets of vertices equally good for a given vertex. Ties are disjoint and may contain one vertex. A matching $M$ is a set of vertex-disjoint edges. An edge $(x,y) \in E \setminus M$ is a {\em blocking edge} for $M$ if $x$ is either unmatched or strictly prefers $y$ to its current partner in $M$, and $y$ is either unmatched or strictly prefers $x$ to its current partner in $M$ or is indifferent between them. A matching is {\em strongly stable} if there is no blocking edge with respect to it. We present an algorithm for the generation of all strongly stable matchings, thus solving an open problem already stated in the book by Gusfield and Irving \cite{GI}. It has previously been shown that strongly stable matchings form a distributive lattice and although the number of strongly stable matchings can be exponential in the number of vertices, we show that there exists a partial order with $O(m)$ elements representing all strongly stable matchings, where $m$ denotes the number of edges in the graph. We give two algorithms that construct two such representations: one in $O(nm^2)$ time and the other in $O(nm)$ time, where $n$ denotes the number of vertices in the graph. Note that the construction of the second representation has the same time complexity as that of computing a single strongly stable matching. |
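The blocking-edge condition quoted above translates directly into a checker. The instance below is made up; preference lists are ordered lists of ties, and since edges are undirected the condition is tested with both role assignments.

```python
def rank(prefs, v, u):
    """Index of the tie of v that contains u (smaller = better)."""
    for i, tie in enumerate(prefs[v]):
        if u in tie:
            return i
    raise ValueError(f"{u} not on {v}'s list")

def blocks(prefs, matching, x, y):
    """Does (x, y) block, with x in the 'strict' role: x unmatched or strictly
    preferring y, and y unmatched, strictly preferring x, or indifferent."""
    mx, my = matching.get(x), matching.get(y)
    x_side = mx is None or rank(prefs, x, y) < rank(prefs, x, mx)
    y_side = my is None or rank(prefs, y, x) <= rank(prefs, y, my)
    return x_side and y_side

def strongly_stable(prefs, matching, edges):
    return not any(blocks(prefs, matching, x, y) or blocks(prefs, matching, y, x)
                   for (x, y) in edges if matching.get(x) != y)

prefs = {
    "a1": [{"b1", "b2"}],       # a1 is indifferent between b1 and b2 (a tie)
    "a2": [{"b1"}, {"b2"}],     # a2 strictly prefers b1 to b2
    "b1": [{"a2"}, {"a1"}],
    "b2": [{"a1"}, {"a2"}],
}
edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")]
M = {"a1": "b2", "b2": "a1", "a2": "b1", "b1": "a2"}
print(strongly_stable(prefs, M, edges))   # True for this instance
```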
2305.09477 | Uwe Peters | Uwe Peters, Mary Carman | Unjustified Sample Sizes and Generalizations in Explainable AI Research:
Principles for More Inclusive User Studies | 3 tables, 1 figure | null | null | null | cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Many ethical frameworks require artificial intelligence (AI) systems to be
explainable. Explainable AI (XAI) models are frequently tested for their
adequacy in user studies. Since different people may have different explanatory
needs, it is important that participant samples in user studies are large
enough to represent the target population to enable generalizations. However,
it is unclear to what extent XAI researchers reflect on and justify their
sample sizes or avoid broad generalizations across people. We analyzed XAI user
studies (n = 220) published between 2012 and 2022. Most studies did not offer
rationales for their sample sizes. Moreover, most papers generalized their
conclusions beyond their target population, and there was no evidence that
broader conclusions in quantitative studies were correlated with larger
samples. These methodological problems can impede evaluations of whether XAI
systems implement the explainability called for in ethical frameworks. We
outline principles for more inclusive XAI user studies.
| [
{
"created": "Mon, 8 May 2023 15:02:21 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Oct 2023 19:20:20 GMT",
"version": "v2"
}
] | 2023-10-17 | [
[
"Peters",
"Uwe",
""
],
[
"Carman",
"Mary",
""
]
] | Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (n = 220) published between 2012 and 2022. Most studies did not offer rationales for their sample sizes. Moreover, most papers generalized their conclusions beyond their target population, and there was no evidence that broader conclusions in quantitative studies were correlated with larger samples. These methodological problems can impede evaluations of whether XAI systems implement the explainability called for in ethical frameworks. We outline principles for more inclusive XAI user studies. |
2301.08412 | Zhuo Zhao | Zhuo Zhao | Fair Credit Scorer through Bayesian Approach | null | null | null | null | cs.LG cs.CY | http://creativecommons.org/licenses/by/4.0/ | Machine learning currently plays an increasingly important role in people's
lives in areas such as credit scoring, auto-driving, disease diagnosing, and
insurance quoting. However, in many of these areas, machine learning models
have exhibited unfair behaviors against some sub-populations, such as
particular groups of race, sex, and age. These unfair behaviors can arise from
pre-existing bias in the training dataset due to historical and social factors.
In this paper, we focus on a real-world application of credit scoring and
construct a fair prediction model by introducing latent variables to remove
the correlation between protected attributes, such as sex and age, and the
observable feature inputs, including house and job. For detailed
implementation, we apply Bayesian approaches, including the Markov Chain Monte
Carlo simulation, to estimate our proposed fair model.
| [
{
"created": "Fri, 20 Jan 2023 03:35:03 GMT",
"version": "v1"
}
] | 2023-01-23 | [
[
"Zhao",
"Zhuo",
""
]
] | Machine learning currently plays an increasingly important role in people's lives in areas such as credit scoring, auto-driving, disease diagnosing, and insurance quoting. However, in many of these areas, machine learning models have exhibited unfair behaviors against some sub-populations, such as particular groups of race, sex, and age. These unfair behaviors can arise from pre-existing bias in the training dataset due to historical and social factors. In this paper, we focus on a real-world application of credit scoring and construct a fair prediction model by introducing latent variables to remove the correlation between protected attributes, such as sex and age, and the observable feature inputs, including house and job. For detailed implementation, we apply Bayesian approaches, including the Markov Chain Monte Carlo simulation, to estimate our proposed fair model.
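As one concrete instance of the MCMC machinery mentioned, here is a minimal random-walk Metropolis-Hastings sampler for a toy one-parameter logistic model; the fairness construction with latent variables is not reproduced.

```python
import numpy as np

# Toy data: one feature, Bernoulli outcome through a logistic link.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
true_w = 1.5
y = (rng.random(200) < 1 / (1 + np.exp(-true_w * x))).astype(float)

def log_post(w):
    logits = w * x
    loglik = np.sum(y * logits - np.log1p(np.exp(logits)))
    return loglik - 0.5 * w**2          # standard-normal prior on w

samples, w = [], 0.0
for _ in range(5000):
    prop = w + 0.3 * rng.normal()       # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(w):
        w = prop                        # accept with MH probability
    samples.append(w)
print("posterior mean of w:", np.mean(samples[1000:]))
```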
2311.03312 | Qitao Zhao | Qitao Zhao, Ce Zheng, Mengyuan Liu, Chen Chen | A Single 2D Pose with Context is Worth Hundreds for 3D Human Pose
Estimation | Accepted to NeurIPS 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dominant paradigm in 3D human pose estimation that lifts a 2D pose
sequence to 3D heavily relies on long-term temporal clues (i.e., using a
daunting number of video frames) for improved accuracy, which incurs
performance saturation, intractable computation and the non-causal problem.
This can be attributed to their inherent inability to perceive spatial context
as plain 2D joint coordinates carry no visual cues. To address this issue, we
propose a straightforward yet powerful solution: leveraging the readily
available intermediate visual representations produced by off-the-shelf
(pre-trained) 2D pose detectors -- no finetuning on the 3D task is even needed.
The key observation is that, while the pose detector learns to localize 2D
joints, such representations (e.g., feature maps) implicitly encode the
joint-centric spatial context thanks to the regional operations in backbone
networks. We design a simple baseline named Context-Aware PoseFormer to
showcase its effectiveness. Without access to any temporal information, the
proposed method significantly outperforms its context-agnostic counterpart,
PoseFormer, and other state-of-the-art methods using up to hundreds of video
frames regarding both speed and precision. Project page:
https://qitaozhao.github.io/ContextAware-PoseFormer
| [
{
"created": "Mon, 6 Nov 2023 18:04:13 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Nov 2023 04:51:34 GMT",
"version": "v2"
}
] | 2023-11-10 | [
[
"Zhao",
"Qitao",
""
],
[
"Zheng",
"Ce",
""
],
[
"Liu",
"Mengyuan",
""
],
[
"Chen",
"Chen",
""
]
] | The dominant paradigm in 3D human pose estimation that lifts a 2D pose sequence to 3D heavily relies on long-term temporal clues (i.e., using a daunting number of video frames) for improved accuracy, which incurs performance saturation, intractable computation and the non-causal problem. This can be attributed to their inherent inability to perceive spatial context as plain 2D joint coordinates carry no visual cues. To address this issue, we propose a straightforward yet powerful solution: leveraging the readily available intermediate visual representations produced by off-the-shelf (pre-trained) 2D pose detectors -- no finetuning on the 3D task is even needed. The key observation is that, while the pose detector learns to localize 2D joints, such representations (e.g., feature maps) implicitly encode the joint-centric spatial context thanks to the regional operations in backbone networks. We design a simple baseline named Context-Aware PoseFormer to showcase its effectiveness. Without access to any temporal information, the proposed method significantly outperforms its context-agnostic counterpart, PoseFormer, and other state-of-the-art methods using up to hundreds of video frames regarding both speed and precision. Project page: https://qitaozhao.github.io/ContextAware-PoseFormer |
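The core operation, reading the detector's intermediate features at the predicted joint locations, is simple to sketch. Nearest-neighbour sampling and the shapes below are assumptions (bilinear sampling would be the smoother choice):

```python
import numpy as np

def joint_context_features(feature_map, joints_2d):
    """Gather the backbone's intermediate features at each predicted joint.
    feature_map: (C, H, W) array from the frozen 2D pose backbone;
    joints_2d: (J, 2) pixel coordinates (x, y)."""
    C, H, W = feature_map.shape
    xs = np.clip(np.round(joints_2d[:, 0]).astype(int), 0, W - 1)
    ys = np.clip(np.round(joints_2d[:, 1]).astype(int), 0, H - 1)
    return feature_map[:, ys, xs].T      # (J, C): one context vector per joint

rng = np.random.default_rng(0)
fmap = rng.normal(size=(256, 64, 48))    # stand-in for a real feature map
joints = rng.uniform([0, 0], [48, 64], size=(17, 2))
tokens = joint_context_features(fmap, joints)
print(tokens.shape)                      # (17, 256) joint-centric tokens
```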
1909.05233 | Ankur Mali | Ankur Mali, Alexander Ororbia, C. Lee Giles | The Neural State Pushdown Automata | 10 pages, 7 Table, 1 figure | null | null | null | cs.NE cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | In order to learn complex grammars, recurrent neural networks (RNNs) require
sufficient computational resources to ensure correct grammar recognition. A
widely-used approach to expand model capacity would be to couple an RNN to an
external memory stack. Here, we introduce a "neural state" pushdown automaton
(NSPDA), which consists of a digital stack, instead of an analog one, that is
coupled to a neural network state machine. We empirically show its
effectiveness in recognizing various context-free grammars (CFGs). First, we
develop the underlying mechanics of the proposed higher order recurrent network
and its manipulation of a stack as well as how to stably program its underlying
pushdown automaton (PDA) to achieve desired finite-state network dynamics.
Next, we introduce a noise regularization scheme for higher-order (tensor)
networks, to our knowledge the first of its kind, and design an algorithm for
improved incremental learning. Finally, we design a method for inserting
grammar rules into a NSPDA and empirically show that this prior knowledge
improves its training convergence time by an order of magnitude and, in some
cases, leads to better generalization. The NSPDA is also compared to a
classical analog stack neural network pushdown automaton (NNPDA) as well as a
wide array of first and second-order RNNs with and without external memory,
trained using different learning algorithms. Our results show that, for Dyck(2)
languages, prior rule-based knowledge is critical for optimization convergence
and for ensuring generalization to longer sequences at test time. We observe
that many RNNs with and without memory, but no prior knowledge, fail to
converge and generalize poorly on CFGs.
| [
{
"created": "Sat, 7 Sep 2019 00:32:11 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Sep 2019 20:12:33 GMT",
"version": "v2"
}
] | 2019-09-23 | [
[
"Mali",
"Ankur",
""
],
[
"Ororbia",
"Alexander",
""
],
[
"Giles",
"C. Lee",
""
]
] | In order to learn complex grammars, recurrent neural networks (RNNs) require sufficient computational resources to ensure correct grammar recognition. A widely-used approach to expand model capacity would be to couple an RNN to an external memory stack. Here, we introduce a "neural state" pushdown automaton (NSPDA), which consists of a digital stack, instead of an analog one, that is coupled to a neural network state machine. We empirically show its effectiveness in recognizing various context-free grammars (CFGs). First, we develop the underlying mechanics of the proposed higher order recurrent network and its manipulation of a stack as well as how to stably program its underlying pushdown automaton (PDA) to achieve desired finite-state network dynamics. Next, we introduce a noise regularization scheme for higher-order (tensor) networks, to our knowledge the first of its kind, and design an algorithm for improved incremental learning. Finally, we design a method for inserting grammar rules into a NSPDA and empirically show that this prior knowledge improves its training convergence time by an order of magnitude and, in some cases, leads to better generalization. The NSPDA is also compared to a classical analog stack neural network pushdown automaton (NNPDA) as well as a wide array of first and second-order RNNs with and without external memory, trained using different learning algorithms. Our results show that, for Dyck(2) languages, prior rule-based knowledge is critical for optimization convergence and for ensuring generalization to longer sequences at test time. We observe that many RNNs with and without memory, but no prior knowledge, fail to converge and generalize poorly on CFGs. |
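For orientation, this is the symbolic pushdown automaton an NSPDA would be programmed to emulate on Dyck(2): a recognizer with a digital stack of discrete symbols, matching the paper's digital (rather than analog) stack.

```python
def dyck2(s):
    """Recognize Dyck(2): balanced strings over two kinds of brackets."""
    stack, pairs = [], {")": "(", "]": "["}
    for ch in s:
        if ch in "([":
            stack.append(ch)               # push action
        elif ch in ")]":
            if not stack or stack.pop() != pairs[ch]:
                return False               # pop action; mismatch rejects
        else:
            return False                   # alphabet is ( ) [ ] only
    return not stack                       # accept iff the stack is empty

for w in ["([])[]", "([)]", "(()", ""]:
    print(repr(w), dyck2(w))
```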
2004.04139 | Xi Liang | Xi Liang, Zechao Shang, Aaron J. Elmore, Sanjay Krishnan, Michael J.
Franklin | Fast and Reliable Missing Data Contingency Analysis with
Predicate-Constraints | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, data analysts largely rely on intuition to determine whether missing
or withheld rows of a dataset significantly affect their analyses. We propose a
framework that can produce automatic contingency analysis, i.e., the range of
values an aggregate SQL query could take, under formal constraints describing
the variation and frequency of missing data tuples. We describe how to process
SUM, COUNT, AVG, MIN, and MAX queries in these conditions resulting in hard
error bounds with testable constraints. We propose an optimization algorithm
based on an integer program that reconciles a set of such constraints, even if
they are overlapping, conflicting, or unsatisfiable, into such bounds. Our
experiments on real-world datasets against several statistical imputation and
inference baselines show that statistical techniques can have a deceptively
high error rate that is often unpredictable. In contrast, our framework offers
hard bounds that are guaranteed to hold if the constraints are not violated. In
spite of these hard bounds, we show competitive accuracy to statistical
baselines.
| [
{
"created": "Wed, 8 Apr 2020 17:50:18 GMT",
"version": "v1"
}
] | 2020-04-09 | [
[
"Liang",
"Xi",
""
],
[
"Shang",
"Zechao",
""
],
[
"Elmore",
"Aaron J.",
""
],
[
"Krishnan",
"Sanjay",
""
],
[
"Franklin",
"Michael J.",
""
]
] | Today, data analysts largely rely on intuition to determine whether missing or withheld rows of a dataset significantly affect their analyses. We propose a framework that can produce automatic contingency analysis, i.e., the range of values an aggregate SQL query could take, under formal constraints describing the variation and frequency of missing data tuples. We describe how to process SUM, COUNT, AVG, MIN, and MAX queries in these conditions resulting in hard error bounds with testable constraints. We propose an optimization algorithm based on an integer program that reconciles a set of such constraints, even if they are overlapping, conflicting, or unsatisfiable, into such bounds. Our experiments on real-world datasets against several statistical imputation and inference baselines show that statistical techniques can have a deceptively high error rate that is often unpredictable. In contrast, our framework offers hard bounds that are guaranteed to hold if the constraints are not violated. In spite of these hard bounds, we show competitive accuracy to statistical baselines. |
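A single predicate-constraint yields hard bounds by elementary interval arithmetic; the sketch below handles one constraint for SUM and COUNT, leaving out the paper's integer-program reconciliation of many overlapping constraints.

```python
def sum_bounds(observed, k_missing, lo, hi):
    """Hard bounds on SUM(col) when at most k_missing matching tuples may be
    absent, each with a value known to lie in [lo, hi]. Because the number
    of missing tuples ranges over 0..k_missing, a positive lo (or negative
    hi) cannot tighten the bound past the observed sum."""
    s = sum(observed)
    return s + k_missing * min(lo, 0), s + k_missing * max(hi, 0)

def count_bounds(observed, k_missing):
    """Hard bounds on COUNT(*) under the same constraint."""
    n = len(observed)
    return n, n + k_missing

sales = [120.0, 80.5, 99.9]
print(sum_bounds(sales, k_missing=2, lo=0.0, hi=200.0))   # (300.4, 700.4)
print(count_bounds(sales, k_missing=2))                   # (3, 5)
```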
1810.08988 | Nora Connor | Nora Connor and Aaron Clauset | Predicting the outcomes of policy diffusion from U.S. states to federal
law | null | null | null | null | cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the United States, national policies often begin as state laws, which then
spread from state to state until they gain momentum to become enacted as a
national policy. However, not every state policy reaches the national level.
Previous work has suggested that state-level policies are more likely to become
national policies depending on their geographic origin, their category of
legislation, or some characteristic of their initiating states, such as wealth,
urbanicity, or ideological liberalism. Here, we tested these hypotheses by
divorcing the set of traits from the states' identities and building predictive
forecasting models of state policies becoming national policies. Using a large,
longitudinal data set of state level policies and their traits, we train models
to predict (i) whether policies become national policy, and (ii) how many
states must pass a given policy before it becomes national. Using these models
as components, we then develop a logistic growth model to forecast when a
currently spreading state-level policy is likely to pass at the national level.
Our results indicate that traits of initiating states are not systematically
correlated with becoming national policy and they predict neither how many
states must enact a policy before it becomes national nor whether it ultimately
becomes a national law. In contrast, the cumulative number of state-level
adoptions of a policy is reasonably predictive of when a policy becomes
national. For the policies of same sex marriage and methamphetamine precursor
laws, we investigate how well the logistic growth model could forecast the
probable time horizon for true national action. We close with a data-driven
forecast of when marijuana legalization and "stand your ground" laws will
become national policy.
| [
{
"created": "Sun, 21 Oct 2018 16:51:00 GMT",
"version": "v1"
}
] | 2018-10-23 | [
[
"Connor",
"Nora",
""
],
[
"Clauset",
"Aaron",
""
]
] | In the United States, national policies often begin as state laws, which then spread from state to state until they gain momentum to become enacted as a national policy. However, not every state policy reaches the national level. Previous work has suggested that state-level policies are more likely to become national policies depending on their geographic origin, their category of legislation, or some characteristic of their initiating states, such as wealth, urbanicity, or ideological liberalism. Here, we tested these hypotheses by divorcing the set of traits from the states' identities and building predictive forecasting models of state policies becoming national policies. Using a large, longitudinal data set of state level policies and their traits, we train models to predict (i) whether policies become national policy, and (ii) how many states must pass a given policy before it becomes national. Using these models as components, we then develop a logistic growth model to forecast when a currently spreading state-level policy is likely to pass at the national level. Our results indicate that traits of initiating states are not systematically correlated with becoming national policy and they predict neither how many states must enact a policy before it becomes national nor whether it ultimately becomes a national law. In contrast, the cumulative number of state-level adoptions of a policy is reasonably predictive of when a policy becomes national. For the policies of same sex marriage and methamphetamine precursor laws, we investigate how well the logistic growth model could forecast the probable time horizon for true national action. We close with a data-driven forecast of when marijuana legalization and "stand your ground" laws will become national policy. |
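The forecasting step reduces to fitting a three-parameter logistic curve and inverting it at a threshold. A sketch with invented adoption counts and a hypothetical 35-state threshold:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: ceiling K, rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Made-up cumulative state adoptions over time (years since first adoption).
years = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
adoptions = np.array([1, 2, 4, 8, 14, 21, 27], dtype=float)

(K, r, t0), _ = curve_fit(logistic, years, adoptions,
                          p0=[50.0, 0.5, 8.0], maxfev=10000)

threshold = 35.0    # hypothetical count needed before national action
if threshold < K:
    # Invert K / (1 + exp(-r (t - t0))) = threshold for t.
    t_cross = t0 - np.log(K / threshold - 1.0) / r
    print(f"fitted K={K:.1f}, r={r:.2f}; threshold crossed at t={t_cross:.1f}")
else:
    print("fitted ceiling below threshold; no crossing forecast")
```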
cs/0608079 | Anna Gilbert | A. C. Gilbert, M. J. Strauss, J. A. Tropp, and R. Vershynin | Algorithmic linear dimension reduction in the l_1 norm for sparse
vectors | null | null | null | null | cs.DS | null | This paper develops a new method for recovering m-sparse signals that is
simultaneously uniform and quick. We present a reconstruction algorithm whose
run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal.
The reconstruction error is within a logarithmic factor (in m) of the optimal
m-term approximation error in l_1. In particular, the algorithm recovers
m-sparse signals perfectly and noisy signals are recovered with polylogarithmic
distortion. Our algorithm makes O(m log^2 (d)) measurements, which is within a
logarithmic factor of optimal. We also present a small-space implementation of
the algorithm. These sketching techniques and the corresponding reconstruction
algorithms provide an algorithmic dimension reduction in the l_1 norm. In
particular, vectors of support m in dimension d can be linearly embedded into
O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a
vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)).
Furthermore, this reconstruction is stable and robust under small
perturbations.
| [
{
"created": "Sat, 19 Aug 2006 01:55:14 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Gilbert",
"A. C.",
""
],
[
"Strauss",
"M. J.",
""
],
[
"Tropp",
"J. A.",
""
],
[
"Vershynin",
"R.",
""
]
] | This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and quick. We present a reconstruction algorithm whose run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal. The reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in l_1. In particular, the algorithm recovers m-sparse signals perfectly and noisy signals are recovered with polylogarithmic distortion. Our algorithm makes O(m log^2 (d)) measurements, which is within a logarithmic factor of optimal. We also present a small-space implementation of the algorithm. These sketching techniques and the corresponding reconstruction algorithms provide an algorithmic dimension reduction in the l_1 norm. In particular, vectors of support m in dimension d can be linearly embedded into O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)). Furthermore, this reconstruction is stable and robust under small perturbations. |
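For contrast with the sublinear scheme, a dense compressed-sensing baseline with O(m log^2 d) random measurements can be set up in a few lines; the greedy decoder below (orthogonal matching pursuit) is a generic recovery method, not the paper's sublinear-time algorithm or its structured measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4096, 8
rows = int(m * np.log2(d) ** 2)            # O(m log^2 d) measurements
Phi = rng.choice([-1.0, 1.0], size=(rows, d)) / np.sqrt(rows)

# An m-sparse signal and its sketch.
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.normal(size=m)
y = Phi @ x

# Orthogonal matching pursuit: pick the most correlated column, then
# re-fit by least squares on the selected support.
S, r = [], y.copy()
for _ in range(m):
    S.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    r = y - Phi[:, S] @ coef
xhat = np.zeros(d)
xhat[S] = coef
print("l1 recovery error:", np.linalg.norm(xhat - x, 1))
```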
1707.08238 | Jieming Mao | Xi Chen, Yuanzhi Li, Jieming Mao | A Nearly Instance Optimal Algorithm for Top-k Ranking under the
Multinomial Logit Model | null | null | null | null | cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the active learning problem of top-$k$ ranking from multi-wise
comparisons under the popular multinomial logit model. Our goal is to identify
the top-$k$ items with high probability by adaptively querying sets for
comparisons and observing the noisy output of the most preferred item from each
comparison. To achieve this goal, we design a new active ranking algorithm
without using any information about the underlying items' preference scores. We
also establish a matching lower bound on the sample complexity even when the
set of preference scores is given to the algorithm. These two results together
show that the proposed algorithm is nearly instance optimal (similar to
instance optimal [FLN03], but up to polylog factors). Our work extends the
existing literature on rank aggregation in three directions. First, instead of
studying a static problem with fixed data, we investigate the top-$k$ ranking
problem in an active learning setting. Second, we show our algorithm is nearly
instance optimal, which is a much stronger theoretical guarantee. Finally, we
extend the pairwise comparison to the multi-wise comparison, which has not been
fully explored in ranking literature.
| [
{
"created": "Tue, 25 Jul 2017 22:03:21 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jul 2017 14:44:22 GMT",
"version": "v2"
}
] | 2017-08-01 | [
[
"Chen",
"Xi",
""
],
[
"Li",
"Yuanzhi",
""
],
[
"Mao",
"Jieming",
""
]
] | We study the active learning problem of top-$k$ ranking from multi-wise comparisons under the popular multinomial logit model. Our goal is to identify the top-$k$ items with high probability by adaptively querying sets for comparisons and observing the noisy output of the most preferred item from each comparison. To achieve this goal, we design a new active ranking algorithm without using any information about the underlying items' preference scores. We also establish a matching lower bound on the sample complexity even when the set of preference scores is given to the algorithm. These two results together show that the proposed algorithm is nearly instance optimal (similar to instance optimal [FLN03], but up to polylog factors). Our work extends the existing literature on rank aggregation in three directions. First, instead of studying a static problem with fixed data, we investigate the top-$k$ ranking problem in an active learning setting. Second, we show our algorithm is nearly instance optimal, which is a much stronger theoretical guarantee. Finally, we extend the pairwise comparison to the multi-wise comparison, which has not been fully explored in ranking literature. |
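The feedback model is easy to simulate: a multi-wise query returns the most preferred item drawn from a softmax over latent preference scores. The scores and the query set below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=20)               # latent preference scores

def query(item_set):
    """Noisy output of one multi-wise comparison under the MNL model."""
    s = scores[item_set]
    p = np.exp(s - s.max())                # softmax over the queried set
    p /= p.sum()
    return item_set[rng.choice(len(item_set), p=p)]

# Crude frequency estimate from repeated comparisons of one set.
items = np.array([3, 7, 11, 15])
wins = np.bincount([query(items) for _ in range(2000)], minlength=20)
print("empirical win rates:", wins[items] / 2000)
```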
1501.02825 | Sven Bambach | Sven Bambach | A Survey on Recent Advances of Computer Vision Algorithms for Egocentric
Video | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent technological advances have made lightweight, head mounted cameras
both practical and affordable and products like Google Glass show first
approaches to introduce the idea of egocentric (first-person) video to the
mainstream. Interestingly, the computer vision community has only recently
started to explore this new domain of egocentric vision, where research can
roughly be categorized into three areas: object recognition, activity
detection/recognition, and video summarization. In this paper, we try to give a
broad overview of the different problems that have been addressed and
collect and compare evaluation results. Moreover, along with the emergence of
this new domain came the introduction of numerous new and versatile benchmark
datasets, which we summarize and compare as well.
| [
{
"created": "Mon, 12 Jan 2015 21:14:56 GMT",
"version": "v1"
}
] | 2015-01-14 | [
[
"Bambach",
"Sven",
""
]
] | Recent technological advances have made lightweight, head mounted cameras both practical and affordable and products like Google Glass show first approaches to introduce the idea of egocentric (first-person) video to the mainstream. Interestingly, the computer vision community has only recently started to explore this new domain of egocentric vision, where research can roughly be categorized into three areas: object recognition, activity detection/recognition, and video summarization. In this paper, we try to give a broad overview of the different problems that have been addressed and collect and compare evaluation results. Moreover, along with the emergence of this new domain came the introduction of numerous new and versatile benchmark datasets, which we summarize and compare as well.
2404.09263 | Jin Yang | Jin Yang, Ping Wei, Huan Li, Ziyang Ren | Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint
Moment Retrieval and Highlight Detection | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video moment retrieval and highlight detection are two highly valuable tasks
in video understanding, but only recently have they been studied jointly.
Although existing studies have made impressive advances recently, they
predominantly follow the data-driven bottom-up paradigm. Such a paradigm
overlooks task-specific and inter-task effects, resulting in poor model
performance. In this paper, we propose a novel task-driven top-down framework
TaskWeave for joint moment retrieval and highlight detection. The framework
introduces a task-decoupled unit to capture task-specific and common
representations. To investigate the interplay between the two tasks, we propose
an inter-task feedback mechanism, which transforms the results of one task
into guiding masks to assist the other task. Different from existing methods, we
present a task-dependent joint loss function to optimize the model.
Comprehensive experiments and in-depth ablation studies on QVHighlights, TVSum,
and Charades-STA datasets corroborate the effectiveness and flexibility of the
proposed framework. Codes are available at
https://github.com/EdenGabriel/TaskWeave.
| [
{
"created": "Sun, 14 Apr 2024 14:06:42 GMT",
"version": "v1"
}
] | 2024-05-09 | [
[
"Yang",
"Jin",
""
],
[
"Wei",
"Ping",
""
],
[
"Li",
"Huan",
""
],
[
"Ren",
"Ziyang",
""
]
] | Video moment retrieval and highlight detection are two highly valuable tasks in video understanding, but only recently have they been studied jointly. Although existing studies have made impressive advances recently, they predominantly follow the data-driven bottom-up paradigm. Such a paradigm overlooks task-specific and inter-task effects, resulting in poor model performance. In this paper, we propose a novel task-driven top-down framework TaskWeave for joint moment retrieval and highlight detection. The framework introduces a task-decoupled unit to capture task-specific and common representations. To investigate the interplay between the two tasks, we propose an inter-task feedback mechanism, which transforms the results of one task into guiding masks to assist the other task. Different from existing methods, we present a task-dependent joint loss function to optimize the model. Comprehensive experiments and in-depth ablation studies on QVHighlights, TVSum, and Charades-STA datasets corroborate the effectiveness and flexibility of the proposed framework. Codes are available at https://github.com/EdenGabriel/TaskWeave.
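The inter-task feedback mechanism can be caricatured as gating one task's features with the other task's scores. The sigmoid gate and the shapes below are our assumptions, not TaskWeave's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 75, 128                              # clips per video, feature dim
clip_feats = rng.normal(size=(T, D))        # shared clip representations
highlight_scores = rng.normal(size=T)       # output of the highlight head

def feedback_mask(scores, temperature=1.0):
    """Turn one task's scores into a soft guiding mask for the other task."""
    return 1.0 / (1.0 + np.exp(-scores / temperature))   # sigmoid gate

guided = clip_feats * feedback_mask(highlight_scores)[:, None]
print(guided.shape)                         # (75, 128) re-weighted features
```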
1402.1296 | Tiago Guerreiro | Ricardo Jo\~ao Silveira Santos Gamboa | Mnemonical Body Shortcuts: Gestural Interface for Mobile Devices | 124 pages, MSc Thesis, Technical University of Lisbon | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile devices' user interfaces are still quite similar to traditional
interfaces offered by desktop computers, but those can be highly problematic
when used in a mobile context. Human gesture recognition in mobile interaction
appears as an important area to provide suitable on-the-move usability. We
present a body space based approach to improve mobile device interaction and
mobile performance, which we named as Mnemonical Body Shortcuts. The human body
is presented as a rich repository of meaningful relations which are always
available to interact with. These body-based gestures allow the user to
naturally interact with mobile devices with no movement limitations.
Preliminary studies using Radio Frequency Identification (RFID) technology were
performed, validating Mnemonical Body Shortcuts as an appropriate new mobile
interaction mechanism. Following those studies, we developed inertial sensing
prototypes using an accelerometer, ending in the construction and user testing
of a gestural interface for mobile devices capable of properly recognizing
Mnemonical Body Shortcuts and also providing suitable user control mechanisms
and audio, visual and haptic feedback.
| [
{
"created": "Thu, 6 Feb 2014 09:54:25 GMT",
"version": "v1"
}
] | 2014-02-07 | [
[
"Gamboa",
"Ricardo João Silveira Santos",
""
]
] | Mobile devices' user interfaces are still quite similar to traditional interfaces offered by desktop computers, but those can be highly problematic when used in a mobile context. Human gesture recognition in mobile interaction appears as an important area to provide suitable on-the-move usability. We present a body space based approach to improve mobile device interaction and mobile performance, which we name Mnemonical Body Shortcuts. The human body is presented as a rich repository of meaningful relations which are always available to interact with. These body-based gestures allow the user to naturally interact with mobile devices with no movement limitations. Preliminary studies using Radio Frequency Identification (RFID) technology were performed, validating Mnemonical Body Shortcuts as an appropriate new mobile interaction mechanism. Following those studies, we developed inertial sensing prototypes using an accelerometer, ending in the construction and user testing of a gestural interface for mobile devices capable of properly recognizing Mnemonical Body Shortcuts and also providing suitable user control mechanisms and audio, visual and haptic feedback.
1807.06397 | Ronald de Haan | Ronald de Haan | Expressing Linear Orders Requires Exponential-Size DNNFs | null | null | null | null | cs.CC cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that any DNNF circuit that expresses the set of linear orders over a
set of $n$ candidates must be of size $2^{\Omega(n)}$. Moreover, we show that
there exist DNNF circuits of size $2^{O(n)}$ expressing linear orders over $n$
candidates.
| [
{
"created": "Tue, 17 Jul 2018 13:02:55 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Jul 2018 15:45:40 GMT",
"version": "v2"
},
{
"created": "Thu, 30 May 2019 12:15:48 GMT",
"version": "v3"
}
] | 2019-05-31 | [
[
"de Haan",
"Ronald",
""
]
] | We show that any DNNF circuit that expresses the set of linear orders over a set of $n$ candidates must be of size $2^{\Omega(n)}$. Moreover, we show that there exist DNNF circuits of size $2^{O(n)}$ expressing linear orders over $n$ candidates. |
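The Boolean function studied in the de Haan record above can be made concrete. Below is a minimal brute-force sketch, not taken from the paper, that treats a pairwise variable x[(a, b)] as "candidate a precedes candidate b" and checks whether an assignment encodes a linear order; all names are illustrative, and no DNNF construction is attempted.

```python
from itertools import product

def is_linear_order(x, n):
    """Check whether assignment x[(a, b)] (a != b) encodes a strict linear
    order on n candidates: total, antisymmetric, and transitive."""
    for a in range(n):
        for b in range(a + 1, n):
            if x[(a, b)] == x[(b, a)]:   # totality and antisymmetry together
                return False
    for a, b, c in product(range(n), repeat=3):
        if len({a, b, c}) == 3 and x[(a, b)] and x[(b, c)] and not x[(a, c)]:
            return False                 # transitivity
    return True

# Sanity check: exactly n! of the 2^(n(n-1)) assignments are linear orders.
n = 3
pairs = [(a, b) for a in range(n) for b in range(n) if a != b]
count = sum(is_linear_order(dict(zip(pairs, bits)), n)
            for bits in product([0, 1], repeat=len(pairs)))
print(count)  # 6 == 3!
```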
2102.07437 | Chengyu Dong | Chengyu Dong, Liyuan Liu, Jingbo Shang | Data Quality Matters For Adversarial Training: An Empirical Study | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple intriguing problems are hovering in adversarial training, including
robust overfitting, robustness overestimation, and robustness-accuracy
trade-off. These problems pose great challenges to both reliable evaluation and
practical deployment. Here, we empirically show that these problems share one
common cause -- low-quality samples in the dataset. Specifically, we first
propose a strategy to measure the data quality based on the learning behaviors
of the data during adversarial training and find that low-quality data may not
be useful and even detrimental to the adversarial robustness. We then design
controlled experiments to investigate the interconnections between data quality
and problems in adversarial training. We find that when low-quality data is
removed, robust overfitting and robustness overestimation can be largely
alleviated; and robustness-accuracy trade-off becomes less significant. These
observations not only verify our intuition about data quality but may also open
new opportunities to advance adversarial training.
| [
{
"created": "Mon, 15 Feb 2021 10:17:24 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jun 2021 22:33:20 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Oct 2021 01:00:51 GMT",
"version": "v3"
}
] | 2021-10-08 | [
[
"Dong",
"Chengyu",
""
],
[
"Liu",
"Liyuan",
""
],
[
"Shang",
"Jingbo",
""
]
] | Multiple intriguing problems are hovering in adversarial training, including robust overfitting, robustness overestimation, and robustness-accuracy trade-off. These problems pose great challenges to both reliable evaluation and practical deployment. Here, we empirically show that these problems share one common cause -- low-quality samples in the dataset. Specifically, we first propose a strategy to measure the data quality based on the learning behaviors of the data during adversarial training and find that low-quality data may not be useful and even detrimental to the adversarial robustness. We then design controlled experiments to investigate the interconnections between data quality and problems in adversarial training. We find that when low-quality data is removed, robust overfitting and robustness overestimation can be largely alleviated; and robustness-accuracy trade-off becomes less significant. These observations not only verify our intuition about data quality but may also open new opportunities to advance adversarial training. |
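One plausible reading of the "learning behaviors" idea in the Dong et al. record above is to track, per sample, how often it is classified correctly under attack during training and flag persistently hard samples. The sketch below is an assumption-laden illustration, not the paper's actual metric; the threshold and the toy data are invented.

```python
import numpy as np

def learning_behavior_score(correct):
    """correct: (epochs, n_samples) boolean array; correct[t, i] is True if
    sample i was classified correctly (under attack) at epoch t.
    Score = mean correctness over training; persistently hard samples
    (low score) are flagged as potentially low-quality."""
    return correct.mean(axis=0)

rng = np.random.default_rng(0)
correct = rng.random((50, 1000)) < np.linspace(0.2, 0.9, 1000)  # toy history
scores = learning_behavior_score(correct)
low_quality = np.where(scores < 0.3)[0]   # cutoff is purely illustrative
print(f"{low_quality.size} samples flagged for removal")
```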
1303.1119 | Zungeru Adamu Murtala | A.M. Zungeru, L.-M. Ang, K.P. Seng | Termite-hill: From natural to artificial termites in sensor networks | 26 pages, 4 figures | A.M. Zungeru, L.-M. Ang, K.P. Seng. Termite-hill: From Natural to
Artificial Termites in Sensor Networks, International Journal of Swarm
Intelligence Research (IGI-Publishers), vol. 3(4), pp. 1-23, 2012 | 10.4018/jsir.2012100101 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Termites present a very good natural metaphor for evolutionary computation.
While each individual's computational power is small compared to that of more
evolved species, it is the power of their colonies that inspires communication
engineers. This paper presents a study of artificial termites in sensor
networks for the purpose of solving the routing problem. The behaviors of each
of the termites in their colony allow their simulation in a restricted
environment. The simulated behavior demonstrates how the termites make use of
an auto-catalytic behavior in order to collectively find a solution for a posed
problem in reasonable time. The derived algorithm, termed Termite-hill,
demonstrates the principle of termites' behavior applied to routing problem
solving in the real applications of sensor networks. The performance of the
algorithm was tested on static and dynamic sink scenarios. The results, as
compared with other routing algorithms and with varying network density, show
that Termite-hill is scalable and improves network energy consumption with
control over best-effort service.
| [
{
"created": "Tue, 5 Mar 2013 17:58:29 GMT",
"version": "v1"
}
] | 2013-03-06 | [
[
"Zungeru",
"A. M.",
""
],
[
"Ang",
"L. -M.",
""
],
[
"Seng",
"K. P.",
""
]
] | Termites present a very good natural metaphor for evolutionary computation. While each individual's computational power is small compared to that of more evolved species, it is the power of their colonies that inspires communication engineers. This paper presents a study of artificial termites in sensor networks for the purpose of solving the routing problem. The behaviors of each of the termites in their colony allow their simulation in a restricted environment. The simulated behavior demonstrates how the termites make use of an auto-catalytic behavior in order to collectively find a solution for a posed problem in reasonable time. The derived algorithm, termed Termite-hill, demonstrates the principle of termites' behavior applied to routing problem solving in the real applications of sensor networks. The performance of the algorithm was tested on static and dynamic sink scenarios. The results, as compared with other routing algorithms and with varying network density, show that Termite-hill is scalable and improves network energy consumption with control over best-effort service. |
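The auto-catalytic (pheromone-reinforced) next-hop choice underlying algorithms like Termite-hill can be sketched in a few lines. This is a generic swarm-routing illustration under assumed data structures, not the published Termite-hill implementation; deposit and evaporation rates are placeholders.

```python
import random

def choose_next_hop(neighbors, pheromone):
    """Pick the next hop with probability proportional to the pheromone
    on the link to each neighbor (roulette-wheel selection)."""
    weights = [pheromone[n] for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def reinforce(pheromone, path, deposit=1.0, evaporation=0.1):
    """Evaporate all trails, then reward links used on a successful route;
    this positive feedback is the auto-catalytic ingredient."""
    for n in pheromone:
        pheromone[n] *= (1.0 - evaporation)
    for n in path:
        pheromone[n] += deposit

pheromone = {"A": 1.0, "B": 1.0, "C": 1.0}   # trails toward three neighbors
hop = choose_next_hop(["A", "B", "C"], pheromone)
reinforce(pheromone, [hop])
print(hop, pheromone)
```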
1007.3611 | Jaroslaw Byrka | Jaroslaw Byrka and MohammadReza Ghodsi and Aravind Srinivasan | LP-rounding algorithms for facility-location problems | Added funding information | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study LP-rounding approximation algorithms for metric uncapacitated
facility-location problems. We first give a new analysis for the algorithm of
Chudak and Shmoys, which differs from the analysis of Byrka and Aardal in that
now we do not need any bound based on the solution to the dual LP program.
Besides obtaining the optimal bifactor approximation as do Byrka and Aardal, we
can now also show that the algorithm with scaling parameter equaling 1.58 is,
in fact, a 1.58-approximation algorithm. More importantly, we suggest an
approach based on additional randomization and analyses such as ours, which
could achieve or approach the conjectured optimal 1.46...--approximation for
this basic problem.
Next, using essentially the same techniques, we obtain improved approximation
algorithms in the 2-stage stochastic variant of the problem, where we must open
a subset of facilities having only stochastic information about the future
demand from the clients. For this problem we obtain a 2.2975-approximation
algorithm in the standard setting, and a 2.4957-approximation in the more
restricted, per-scenario setting.
We then study robust fault-tolerant facility location, introduced by Chechik
and Peleg: solutions here are designed to provide low connection cost in case
of failure of up to $k$ facilities. Chechik and Peleg gave a 6.5-approximation
algorithm for $k=1$ and a ($7.5k + 1.5$)-approximation algorithm for general
$k$. We improve this to an LP-rounding $(k+5+4/k)$-approximation algorithm. We
also observe that in case of oblivious failures the expected approximation
ratio can be reduced to $k + 1.5$, and that the integrality gap of the natural
LP-relaxation of the problem is at least $k + 1$.
| [
{
"created": "Wed, 21 Jul 2010 10:48:52 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Mar 2012 13:50:59 GMT",
"version": "v2"
}
] | 2012-03-09 | [
[
"Byrka",
"Jaroslaw",
""
],
[
"Ghodsi",
"MohammadReza",
""
],
[
"Srinivasan",
"Aravind",
""
]
] | We study LP-rounding approximation algorithms for metric uncapacitated facility-location problems. We first give a new analysis for the algorithm of Chudak and Shmoys, which differs from the analysis of Byrka and Aardal in that now we do not need any bound based on the solution to the dual LP program. Besides obtaining the optimal bifactor approximation as do Byrka and Aardal, we can now also show that the algorithm with scaling parameter equaling 1.58 is, in fact, a 1.58-approximation algorithm. More importantly, we suggest an approach based on additional randomization and analyses such as ours, which could achieve or approach the conjectured optimal 1.46...--approximation for this basic problem. Next, using essentially the same techniques, we obtain improved approximation algorithms in the 2-stage stochastic variant of the problem, where we must open a subset of facilities having only stochastic information about the future demand from the clients. For this problem we obtain a 2.2975-approximation algorithm in the standard setting, and a 2.4957-approximation in the more restricted, per-scenario setting. We then study robust fault-tolerant facility location, introduced by Chechik and Peleg: solutions here are designed to provide low connection cost in case of failure of up to $k$ facilities. Chechik and Peleg gave a 6.5-approximation algorithm for $k=1$ and a ($7.5k + 1.5$)-approximation algorithm for general $k$. We improve this to an LP-rounding $(k+5+4/k)$-approximation algorithm. We also observe that in case of oblivious failures the expected approximation ratio can be reduced to $k + 1.5$, and that the integrality gap of the natural LP-relaxation of the problem is at least $k + 1$. |
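The scaling step discussed in the Byrka et al. record above (a scaling parameter around 1.58 applied to fractional facility openings) can be illustrated as follows. This sketch shows only independent randomized rounding of scaled LP variables; the actual Chudak-Shmoys-style algorithms round within clusters and handle client connection separately, so treat this as a teaching toy.

```python
import numpy as np

def round_facilities(y_frac, gamma=1.58, rng=None):
    """Scale the fractional facility-opening variables y_i from an LP
    relaxation by gamma and open each facility independently with
    probability min(1, gamma * y_i)."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.minimum(1.0, gamma * np.asarray(y_frac))
    return rng.random(p.shape) < p

y_frac = [0.1, 0.4, 0.7, 0.05]          # illustrative LP solution
open_mask = round_facilities(y_frac, gamma=1.58)
print("opened facilities:", np.flatnonzero(open_mask))
```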
2110.10966 | Matthew Howe Mr | Matthew Howe, Ian Reid, Jamie Mackenzie | Weakly Supervised Training of Monocular 3D Object Detectors Using Wide
Baseline Multi-view Traffic Camera Data | Paper accepted at The 32nd British Machine Vision Conference, BMVC
2021 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate 7DoF prediction of vehicles at an intersection is an important task
for assessing potential conflicts between road users. In principle, this could
be achieved by a single camera system that is capable of detecting the pose of
each vehicle but this would require a large, accurately labelled dataset from
which to train the detector. Although large vehicle pose datasets exist
(ostensibly developed for autonomous vehicles), we find training on these
datasets inadequate. These datasets contain images from a ground level
viewpoint, whereas an ideal view for intersection observation would be elevated
higher above the road surface. We develop an alternative approach using a
weakly supervised method of fine tuning 3D object detectors for traffic
observation cameras; showing in the process that large existing autonomous
vehicle datasets can be leveraged for pre-training. To fine-tune the monocular
3D object detector, our method utilises multiple 2D detections from
overlapping, wide-baseline views and a loss that encodes the subjacent
geometric consistency. Our method achieves vehicle 7DoF pose prediction
accuracy on our dataset comparable to the top performing monocular 3D object
detectors on autonomous vehicle datasets. We present our training methodology,
multi-view reprojection loss, and dataset.
| [
{
"created": "Thu, 21 Oct 2021 08:26:48 GMT",
"version": "v1"
}
] | 2022-11-30 | [
[
"Howe",
"Matthew",
""
],
[
"Reid",
"Ian",
""
],
[
"Mackenzie",
"Jamie",
""
]
] | Accurate 7DoF prediction of vehicles at an intersection is an important task for assessing potential conflicts between road users. In principle, this could be achieved by a single camera system that is capable of detecting the pose of each vehicle but this would require a large, accurately labelled dataset from which to train the detector. Although large vehicle pose datasets exist (ostensibly developed for autonomous vehicles), we find training on these datasets inadequate. These datasets contain images from a ground level viewpoint, whereas an ideal view for intersection observation would be elevated higher above the road surface. We develop an alternative approach using a weakly supervised method of fine tuning 3D object detectors for traffic observation cameras; showing in the process that large existing autonomous vehicle datasets can be leveraged for pre-training. To fine-tune the monocular 3D object detector, our method utilises multiple 2D detections from overlapping, wide-baseline views and a loss that encodes the subjacent geometric consistency. Our method achieves vehicle 7DoF pose prediction accuracy on our dataset comparable to the top performing monocular 3D object detectors on autonomous vehicle datasets. We present our training methodology, multi-view reprojection loss, and dataset. |
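A minimal version of a multi-view reprojection-consistency loss, in the spirit of the Howe et al. record above: project a 3D point into each view and penalize its distance to the detected 2D centers. The camera matrices and detections here are toy values, and the paper's full loss over 7DoF boxes is not reproduced.

```python
import numpy as np

def reprojection_loss(X, projections, centers_2d):
    """Project a 3D point X (world frame) into each view with its 3x4
    camera matrix and penalize distance to the detected 2D box center."""
    loss = 0.0
    for P, c in zip(projections, centers_2d):
        x = P @ np.append(X, 1.0)          # homogeneous projection
        x = x[:2] / x[2]                   # perspective divide
        loss += np.sum((x - c) ** 2)
    return loss / len(projections)

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])             # toy camera 1
P2 = np.hstack([np.eye(3), np.array([[0.5], [0], [0]])])  # toy camera 2
X = np.array([0.0, 0.0, 5.0])
print(reprojection_loss(X, [P1, P2], [np.array([0, 0]), np.array([0.1, 0])]))
```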
2001.03924 | Ivan Petrenko | Ivan Petrenko | An improvement of the upper bound for GKS communication game | null | null | null | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The GKS game was formulated by Justin Gilmer, Michal Koucky, and Michael Saks
in their research of the sensitivity conjecture. Mario Szegedy invented a
protocol for the game with the cost of $O(n^{0.4732})$. Then a protocol with
the cost of $O(n^{0.4696})$ was obtained by DeVon Ingram, who used a bipartite
matching. We propose a slight improvement of Ingram's method and design a
protocol with a cost of $O(n^{0.4693})$.
| [
{
"created": "Sun, 12 Jan 2020 12:50:23 GMT",
"version": "v1"
}
] | 2020-01-14 | [
[
"Petrenko",
"Ivan",
""
]
] | The GKS game was formulated by Justin Gilmer, Michal Koucky, and Michael Saks in their research of the sensitivity conjecture. Mario Szegedy invented a protocol for the game with the cost of $O(n^{0.4732})$. Then a protocol with the cost of $O(n^{0.4696})$ was obtained by DeVon Ingram, who used a bipartite matching. We propose a slight improvement of Ingram's method and design a protocol with a cost of $O(n^{0.4693})$. |
1705.11077 | Yong Man Ro | Seong Tae Kim, Yong Man Ro | EvaluationNet: Can Human Skill be Evaluated by Deep Networks? | 8 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the recent substantial growth of media such as YouTube, a considerable
number of instructional videos covering a wide variety of tasks are available
online. Therefore, online instructional videos have become a rich resource for
humans to learn everyday skills. In order to improve the effectiveness of the
learning with instructional video, observation and evaluation of the activity
are required. However, it is difficult for an expert to observe and evaluate
every activity step. In this study, a novel deep learning framework which targets
human activity evaluation for learning from instructional video has been
proposed. In order to deal with the inherent variability of activities, we
propose to model activity as a structured process. First, action units are
encoded from dense trajectories with LSTM network. The variable-length action
unit features are then evaluated by a Siamese LSTM network. By the comparative
experiments on a public dataset, the effectiveness of the proposed method has
been demonstrated.
| [
{
"created": "Wed, 31 May 2017 13:28:01 GMT",
"version": "v1"
}
] | 2017-06-01 | [
[
"Kim",
"Seong Tae",
""
],
[
"Ro",
"Yong Man",
""
]
] | With the recent substantial growth of media such as YouTube, a considerable number of instructional videos covering a wide variety of tasks are available online. Therefore, online instructional videos have become a rich resource for humans to learn everyday skills. In order to improve the effectiveness of the learning with instructional video, observation and evaluation of the activity are required. However, it is difficult for an expert to observe and evaluate every activity step. In this study, a novel deep learning framework which targets human activity evaluation for learning from instructional video has been proposed. In order to deal with the inherent variability of activities, we propose to model activity as a structured process. First, action units are encoded from dense trajectories with LSTM network. The variable-length action unit features are then evaluated by a Siamese LSTM network. By the comparative experiments on a public dataset, the effectiveness of the proposed method has been demonstrated. |
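The two-stage architecture in the Kim and Ro record above (sequence encoding followed by Siamese comparison) can be sketched with standard PyTorch modules. The feature dimension, hidden size, and scoring head below are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    """Encode two action-unit sequences with a shared LSTM and score their
    similarity from the final hidden states."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)

    def encode(self, seq):
        _, (h, _) = self.lstm(seq)     # h: (num_layers, batch, hidden)
        return h[-1]

    def forward(self, seq_a, seq_b):
        za, zb = self.encode(seq_a), self.encode(seq_b)
        return torch.sigmoid(self.head(torch.cat([za, zb], dim=-1)))

model = SiameseLSTM()
a = torch.randn(2, 30, 64)   # (batch, frames, feature); lengths may differ
b = torch.randn(2, 45, 64)
print(model(a, b).shape)     # torch.Size([2, 1])
```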
1604.04829 | Raka Jovanovic | Raka Jovanovic, Tatsushi Nishi, Stefan Voss | A heuristic approach for dividing graphs into bi-connected components
with a size constraint | null | null | null | null | cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a new problem of finding the maximal bi-connected
partitioning of a graph with a size constraint (MBCPG-SC). With the goal of
finding approximate solutions for the MBCPG-SC, a heuristic method is developed
based on the open ear decomposition of graphs. Its essential part is an
adaptation of the breadth first search which makes it possible to grow
bi-connected subgraphs. The proposed randomized algorithm consists of growing
several subgraphs in parallel. The quality of solutions generated in this way
is further improved using a local search which exploits neighboring relations
between the subgraphs. In order to evaluate the performance of the method, an
algorithm for generating pseudo-random unit disc graphs with known optimal
solutions is created. The conducted computational experiments show that the
proposed method frequently manages to find optimal solutions and has an average
error of only a few percent relative to known optimal solutions. Further, it
manages to find high-quality approximate solutions for graphs having up to 10,000 nodes in
reasonable time.
| [
{
"created": "Sun, 17 Apr 2016 05:19:05 GMT",
"version": "v1"
}
] | 2016-04-19 | [
[
"Jovanovic",
"Raka",
""
],
[
"Nishi",
"Tatsushi",
""
],
[
"Voss",
"Stefan",
""
]
] | In this paper we propose a new problem of finding the maximal bi-connected partitioning of a graph with a size constraint (MBCPG-SC). With the goal of finding approximate solutions for the MBCPG-SC, a heuristic method is developed based on the open ear decomposition of graphs. Its essential part is an adaptation of the breadth first search which makes it possible to grow bi-connected subgraphs. The proposed randomized algorithm consists of growing several subgraphs in parallel. The quality of solutions generated in this way is further improved using a local search which exploits neighboring relations between the subgraphs. In order to evaluate the performance of the method, an algorithm for generating pseudo-random unit disc graphs with known optimal solutions is created. The conducted computational experiments show that the proposed method frequently manages to find optimal solutions and has an average error of only a few percent relative to known optimal solutions. Further, it manages to find high-quality approximate solutions for graphs having up to 10,000 nodes in reasonable time. |
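The benchmark generator mentioned in the Jovanovic et al. record above relies on pseudo-random unit disc graphs; a minimal generator is easy to write (connect points within a radius). The planted-optimal-solution part of their generator is not reproduced here, and the radius is an arbitrary choice.

```python
import numpy as np

def unit_disc_graph(n, radius, seed=0):
    """Sample n points uniformly in the unit square and connect every
    pair at Euclidean distance <= radius."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = (d <= radius) & ~np.eye(n, dtype=bool)
    return pts, adj

pts, adj = unit_disc_graph(100, radius=0.15)
print("edges:", int(adj.sum()) // 2)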
2402.17918 | Amin Sarihi | Amin Sarihi, Ahmad Patooghy, Abdel-Hameed A. Badawy, Peter Jamieson | The Seeker's Dilemma: Realistic Formulation and Benchmarking for
Hardware Trojan Detection | null | null | null | null | cs.CR cs.AR cs.LG | http://creativecommons.org/licenses/by/4.0/ | This work focuses on advancing security research in the hardware design space
by formally defining the realistic problem of Hardware Trojan (HT) detection.
The goal is to model HT detection more closely to the real world, i.e.,
describing the problem as "The Seeker's Dilemma" (an extension of Hide&Seek on
a graph), where a detecting agent is unaware of whether circuits are infected
by HTs or not. Using this theoretical problem formulation, we create a
benchmark that consists of a mixture of HT-free and HT-infected restructured
circuits while preserving their original functionalities. The restructured
circuits are randomly infected by HTs, causing a situation where the defender
is uncertain if a circuit is infected or not. We believe that our innovative
dataset will help the community better judge the detection quality of different
methods by comparing their success rates in circuit classification. We use our
developed benchmark to evaluate three state-of-the-art HT detection tools to
show baseline results for this approach. We use Principal Component Analysis to
assess the strength of our benchmark, where we observe that some restructured
HT-infected circuits are mapped closely to HT-free circuits, leading to
significant label misclassification by detectors.
| [
{
"created": "Tue, 27 Feb 2024 22:14:01 GMT",
"version": "v1"
}
] | 2024-02-29 | [
[
"Sarihi",
"Amin",
""
],
[
"Patooghy",
"Ahmad",
""
],
[
"Badawy",
"Abdel-Hameed A.",
""
],
[
"Jamieson",
"Peter",
""
]
] | This work focuses on advancing security research in the hardware design space by formally defining the realistic problem of Hardware Trojan (HT) detection. The goal is to model HT detection more closely to the real world, i.e., describing the problem as "The Seeker's Dilemma" (an extension of Hide&Seek on a graph), where a detecting agent is unaware of whether circuits are infected by HTs or not. Using this theoretical problem formulation, we create a benchmark that consists of a mixture of HT-free and HT-infected restructured circuits while preserving their original functionalities. The restructured circuits are randomly infected by HTs, causing a situation where the defender is uncertain if a circuit is infected or not. We believe that our innovative dataset will help the community better judge the detection quality of different methods by comparing their success rates in circuit classification. We use our developed benchmark to evaluate three state-of-the-art HT detection tools to show baseline results for this approach. We use Principal Component Analysis to assess the strength of our benchmark, where we observe that some restructured HT-infected circuits are mapped closely to HT-free circuits, leading to significant label misclassification by detectors. |
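The PCA-based benchmark assessment in the Sarihi et al. record above can be sketched with scikit-learn; the feature matrix below is synthetic and stands in for whatever structural circuit features one extracts (gate counts, fan-in statistics, and so on).

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: one row of structural features per circuit, plus
# HT-free (0) / HT-infected (1) labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 32))
labels = rng.integers(0, 2, size=200)

proj = PCA(n_components=2).fit_transform(features)
# Overlap of the two classes in the first two principal components hints
# at how hard the benchmark is to separate.
for cls in (0, 1):
    centroid = proj[labels == cls].mean(axis=0)
    print(f"class {cls} centroid: {centroid}")
```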
2211.06929 | Carlos Purves | Carlos Purves, Pietro Li\`o and C\u{a}t\u{a}lina Cangea | Goal-Conditioned Reinforcement Learning in the Presence of an Adversary | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning has seen increasing applications in real-world
contexts over the past few years. However, physical environments are often
imperfect and policies that perform well in simulation might not achieve the
same performance when applied elsewhere. A common approach to combat this is to
train agents in the presence of an adversary. An adversary acts to destabilise
the agent, which learns a more robust policy and can better handle realistic
conditions. Many real-world applications of reinforcement learning also make
use of goal-conditioning: this is particularly useful in the context of
robotics, as it allows the agent to act differently, depending on which goal is
selected. Here, we focus on the problem of goal-conditioned learning in the
presence of an adversary. We first present DigitFlip and CLEVR-Play, two novel
goal-conditioned environments that support acting against an adversary. Next,
we propose EHER and CHER -- two HER-based algorithms for goal-conditioned
learning -- and evaluate their performance. Finally, we unify the two threads
and introduce IGOAL: a novel framework for goal-conditioned learning in the
presence of an adversary. Experimental results show that combining IGOAL with
EHER allows agents to significantly outperform existing approaches, when acting
against both random and competent adversaries.
| [
{
"created": "Sun, 13 Nov 2022 15:40:01 GMT",
"version": "v1"
}
] | 2022-11-15 | [
[
"Purves",
"Carlos",
""
],
[
"Liò",
"Pietro",
""
],
[
"Cangea",
"Cătălina",
""
]
] | Reinforcement learning has seen increasing applications in real-world contexts over the past few years. However, physical environments are often imperfect and policies that perform well in simulation might not achieve the same performance when applied elsewhere. A common approach to combat this is to train agents in the presence of an adversary. An adversary acts to destabilise the agent, which learns a more robust policy and can better handle realistic conditions. Many real-world applications of reinforcement learning also make use of goal-conditioning: this is particularly useful in the context of robotics, as it allows the agent to act differently, depending on which goal is selected. Here, we focus on the problem of goal-conditioned learning in the presence of an adversary. We first present DigitFlip and CLEVR-Play, two novel goal-conditioned environments that support acting against an adversary. Next, we propose EHER and CHER -- two HER-based algorithms for goal-conditioned learning -- and evaluate their performance. Finally, we unify the two threads and introduce IGOAL: a novel framework for goal-conditioned learning in the presence of an adversary. Experimental results show that combining IGOAL with EHER allows agents to significantly outperform existing approaches, when acting against both random and competent adversaries. |
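EHER and CHER in the Purves et al. record above build on Hindsight Experience Replay; the vanilla HER relabeling step they extend looks roughly like the sketch below (the "future" goal-sampling strategy). The transition layout and reward function are assumptions, not the paper's code.

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """For each transition, also store k copies whose goal is replaced by a
    state actually reached later in the episode, recomputing the reward."""
    relabeled = []
    for t, (state, action, goal, next_state) in enumerate(episode):
        relabeled.append((state, action, goal,
                          reward_fn(next_state, goal), next_state))
        for _ in range(k):
            future = random.randint(t, len(episode) - 1)
            new_goal = episode[future][3]          # an achieved state
            relabeled.append((state, action, new_goal,
                              reward_fn(next_state, new_goal), next_state))
    return relabeled

reward_fn = lambda s, g: 0.0 if s == g else -1.0   # sparse goal reward
episode = [((0,), "a", (9,), (1,)), ((1,), "b", (9,), (2,))]
print(len(her_relabel(episode, reward_fn)))        # 2 * (1 + 4) = 10
```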
1209.5803 | Boyu Li | Boyu Li and Ender Ayanoglu | Full-Diversity Precoding Design of Bit-Interleaved Coded Multiple
Beamforming with Orthogonal Frequency Division Multiplexing | accepted to journal. arXiv admin note: text overlap with
arXiv:1109.3510 | IEEE TCOM, Vol. 61, No. 6, Pages 2432-2445, Jun. 2013 | 10.1109/TCOMM.2013.041113.120688 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-Input Multi-Output (MIMO) techniques have been incorporated with
Orthogonal Frequency Division Multiplexing (OFDM) for broadband wireless
communication systems. Bit-Interleaved Coded Multiple Beamforming (BICMB) can
achieve both spatial diversity and spatial multiplexing for flat fading MIMO
channels. For frequency selective fading MIMO channels, BICMB with OFDM
(BICMB-OFDM) can be employed to provide both spatial diversity and multipath
diversity, making it an important technique. In our previous work, the
subcarrier grouping technique was applied to combat the negative effect of
subcarrier correlation. It was also proved that full diversity of BICMB-OFDM
with Subcarrier Grouping (BICMB-OFDM-SG) can be achieved within the condition
R_cSL<=1, where R_c, S, and L are the code rate, the number of parallel streams
at each subcarrier, and the number of channel taps, respectively. The full
diversity condition implies that if S increases, R_c may have to decrease to
maintain full diversity. As a result, increasing the number of parallel streams
may not improve the total transmission rate. In this paper, the precoding
technique is employed to overcome the full diversity restriction issue of
R_cSL<=1 for BICMB-OFDM-SG. First, the diversity analysis of precoded
BICMB-OFDM-SG is carried out. Then, the full-diversity precoding design is
developed with the minimum achievable decoding complexity.
| [
{
"created": "Wed, 26 Sep 2012 01:03:58 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jun 2013 21:37:04 GMT",
"version": "v2"
}
] | 2013-07-02 | [
[
"Li",
"Boyu",
""
],
[
"Ayanoglu",
"Ender",
""
]
] | Multi-Input Multi-Output (MIMO) techniques have been incorporated with Orthogonal Frequency Division Multiplexing (OFDM) for broadband wireless communication systems. Bit-Interleaved Coded Multiple Beamforming (BICMB) can achieve both spatial diversity and spatial multiplexing for flat fading MIMO channels. For frequency selective fading MIMO channels, BICMB with OFDM (BICMB-OFDM) can be employed to provide both spatial diversity and multipath diversity, making it an important technique. In our previous work, the subcarrier grouping technique was applied to combat the negative effect of subcarrier correlation. It was also proved that full diversity of BICMB-OFDM with Subcarrier Grouping (BICMB-OFDM-SG) can be achieved within the condition R_cSL<=1, where R_c, S, and L are the code rate, the number of parallel streams at each subcarrier, and the number of channel taps, respectively. The full diversity condition implies that if S increases, R_c may have to decrease to maintain full diversity. As a result, increasing the number of parallel streams may not improve the total transmission rate. In this paper, the precoding technique is employed to overcome the full diversity restriction issue of R_cSL<=1 for BICMB-OFDM-SG. First, the diversity analysis of precoded BICMB-OFDM-SG is carried out. Then, the full-diversity precoding design is developed with the minimum achievable decoding complexity. |
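The per-subcarrier multiple beamforming that BICMB-OFDM builds on diagonalizes the MIMO channel with an SVD; a minimal sketch follows. The paper's contributions (bit-interleaved coding across streams, subcarrier grouping, and the full-diversity precoder design) sit on top of this baseline and are not shown.

```python
import numpy as np

def svd_beamforming(H, S):
    """Single-user multiple beamforming on one subcarrier: precode with the
    first S right singular vectors of the channel H and combine with the
    first S left singular vectors, yielding S parallel streams."""
    U, sigma, Vh = np.linalg.svd(H)
    F = Vh.conj().T[:, :S]        # precoder
    W = U[:, :S]                  # combiner
    return F, W, sigma[:S]        # stream gains are the singular values

H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
F, W, gains = svd_beamforming(H, S=2)
# Effective channel is diagonal: W^H H F = diag(gains)
print(np.round(W.conj().T @ H @ F, 6))
```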
1407.0699 | Nasr Mohamed | Nasr Mohamed | Enumeration of Spanning Trees Using Edge Exchange with Minimal
Partitioning | Master Thesis | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this thesis, the Minimal Partitioning (MP) algorithm, an innovative
algorithm for enumerating all the spanning trees in an undirected graph, is
presented. While the MP algorithm uses a computational tree graph to traverse
all possible spanning trees by the edge exchange technique, it has two unique
properties compared to previous algorithms. In the first place, the algorithm
maintains a state of minimal partition size in the spanning tree due to edge
deletion. This is realized by swapping peripheral edges, more precisely leaf
edges, in most of the edge exchange operations. Consequently, the main
structure of the spanning trees is preserved during the steps of the
enumeration process. This extra constraint proves to be advantageous in many
applications where the partition size is a factor in the solution cost.
Secondly, we introduce, and utilize, the new concept of edge promotion: the
exchanged edges always share one end. Practically, and as a result of this
property, the interface between the two partitions of the spanning tree during
edge exchange has to be maintained from one side only.
For a graph $G(V,E)$, the MP algorithm requires $O(\log V + E/V)$ expected
time and $O(V \log V)$ worst case time for generating each spanning tree. The
MP algorithm requires a total expected space limit of $O(E \log V)$ with a
worst case limit of $O(EV)$. Like all edge exchange algorithms, the MP
algorithm retains the advantage of compacted output of $O(1)$ per spanning
tree by listing the relative differences only.
Three sample real-world applications of spanning tree enumeration are
explored and the effects of using the MP algorithm are studied. Namely:
construction of nets of polyhedra, multi-robot spanning tree routing, and
computing the electric current in edges of a network. We report that the MP
algorithm outperforms other algorithms by a factor of $O(V)$ in time
complexity.
| [
{
"created": "Wed, 2 Jul 2014 17:15:28 GMT",
"version": "v1"
}
] | 2014-07-04 | [
[
"Mohamed",
"Nasr",
""
]
] | In this thesis, the Minimal Partitioning (MP) algorithm, an innovative algorithm for enumerating all the spanning trees in an undirected graph, is presented. While the MP algorithm uses a computational tree graph to traverse all possible spanning trees by the edge exchange technique, it has two unique properties compared to previous algorithms. In the first place, the algorithm maintains a state of minimal partition size in the spanning tree due to edge deletion. This is realized by swapping peripheral edges, more precisely leaf edges, in most of the edge exchange operations. Consequently, the main structure of the spanning trees is preserved during the steps of the enumeration process. This extra constraint proves to be advantageous in many applications where the partition size is a factor in the solution cost. Secondly, we introduce, and utilize, the new concept of edge promotion: the exchanged edges always share one end. Practically, and as a result of this property, the interface between the two partitions of the spanning tree during edge exchange has to be maintained from one side only. For a graph $G(V,E)$, the MP algorithm requires $O(\log V + E/V)$ expected time and $O(V \log V)$ worst case time for generating each spanning tree. The MP algorithm requires a total expected space limit of $O(E \log V)$ with a worst case limit of $O(EV)$. Like all edge exchange algorithms, the MP algorithm retains the advantage of compacted output of $O(1)$ per spanning tree by listing the relative differences only. Three sample real-world applications of spanning tree enumeration are explored and the effects of using the MP algorithm are studied. Namely: construction of nets of polyhedra, multi-robot spanning tree routing, and computing the electric current in edges of a network. We report that the MP algorithm outperforms other algorithms by a factor of $O(V)$ in time complexity. |
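The edge exchange primitive at the heart of the MP algorithm in the record above works as follows: adding a non-tree edge to a spanning tree closes exactly one cycle, and removing any tree edge on that cycle yields a new spanning tree. The sketch below shows only this generic step, not MP's leaf-edge selection or minimal-partition bookkeeping; edges are stored as sorted tuples for comparison.

```python
from collections import deque

def tree_path(tree_adj, u, v):
    """Edges on the unique u-v path in a spanning tree (BFS back-pointers)."""
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            break
        for y in tree_adj[x]:
            if y not in parent:
                parent[y] = x
                q.append(y)
    path, x = [], v
    while parent[x] is not None:
        path.append(tuple(sorted((parent[x], x))))
        x = parent[x]
    return path

def edge_exchanges(tree_edges, non_tree_edge):
    """Yield every spanning tree reachable by exchanging the given non-tree
    edge with a tree edge on the cycle it closes."""
    adj = {}
    for a, b in tree_edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    u, v = non_tree_edge
    for cut in tree_path(adj, u, v):
        yield [e for e in tree_edges if e != cut] + [non_tree_edge]

tree = [(0, 1), (1, 2), (2, 3)]
for t in edge_exchanges(tree, (0, 3)):
    print(sorted(t))
```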
2208.07591 | Subhankar Roy | Subhankar Roy, Martin Trapp, Andrea Pilzer, Juho Kannala, Nicu Sebe,
Elisa Ricci, Arno Solin | Uncertainty-guided Source-free Domain Adaptation | ECCV 2022 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Source-free domain adaptation (SFDA) aims to adapt a classifier to an
unlabelled target data set by only using a pre-trained source model. However,
the absence of the source data and the domain shift makes the predictions on
the target data unreliable. We propose quantifying the uncertainty in the
source model predictions and utilizing it to guide the target adaptation. For
this, we construct a probabilistic source model by incorporating priors on the
network parameters inducing a distribution over the model predictions.
Uncertainties are estimated by employing a Laplace approximation and
incorporated to identify target data points that do not lie in the source
manifold and to down-weight them when maximizing the mutual information on the
target data. Unlike recent works, our probabilistic treatment is
computationally lightweight, decouples source training and target adaptation,
and requires no specialized source training or changes of the model
architecture. We show the advantages of uncertainty-guided SFDA over
traditional SFDA in the closed-set and open-set settings and provide empirical
evidence that our approach is more robust to strong domain shifts even without
tuning.
| [
{
"created": "Tue, 16 Aug 2022 08:03:30 GMT",
"version": "v1"
}
] | 2022-08-17 | [
[
"Roy",
"Subhankar",
""
],
[
"Trapp",
"Martin",
""
],
[
"Pilzer",
"Andrea",
""
],
[
"Kannala",
"Juho",
""
],
[
"Sebe",
"Nicu",
""
],
[
"Ricci",
"Elisa",
""
],
[
"Solin",
"Arno",
""
]
] | Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model. However, the absence of the source data and the domain shift makes the predictions on the target data unreliable. We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation. For this, we construct a probabilistic source model by incorporating priors on the network parameters inducing a distribution over the model predictions. Uncertainties are estimated by employing a Laplace approximation and incorporated to identify target data points that do not lie in the source manifold and to down-weight them when maximizing the mutual information on the target data. Unlike recent works, our probabilistic treatment is computationally lightweight, decouples source training and target adaptation, and requires no specialized source training or changes of the model architecture. We show the advantages of uncertainty-guided SFDA over traditional SFDA in the closed-set and open-set settings and provide empirical evidence that our approach is more robust to strong domain shifts even without tuning. |
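One way to realize the down-weighting described in the Roy et al. record above is to weight the per-sample terms of an information-maximization objective by a decreasing function of estimated uncertainty. The weighting form below is illustrative only; the paper obtains its uncertainties from a Laplace approximation over the source model's parameters, which is not reproduced here.

```python
import torch

def weighted_information_maximization(logits, uncertainty, tau=1.0):
    """Down-weight uncertain target samples when maximizing mutual
    information: weighted conditional entropy minus marginal entropy."""
    probs = torch.softmax(logits, dim=1)
    w = torch.exp(-uncertainty / tau)              # low weight = uncertain
    w = w / w.sum()
    cond_ent = -(w[:, None] * probs * probs.clamp_min(1e-8).log()).sum()
    marginal = (w[:, None] * probs).sum(dim=0)
    marg_ent = -(marginal * marginal.clamp_min(1e-8).log()).sum()
    return cond_ent - marg_ent                     # minimize this

logits = torch.randn(32, 10)
uncertainty = torch.rand(32)    # e.g., from a Laplace approximation
print(weighted_information_maximization(logits, uncertainty))
```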
1301.1704 | Qi Hu | Qi Hu, Nail A. Gumerov, Ramani Duraiswami | Parallel Algorithms for Constructing Data Structures for Fast Multipole
Methods | null | null | null | null | cs.MS cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present efficient algorithms to build data structures and the lists needed
for fast multipole methods. The algorithms are capable of being efficiently
implemented on serial, data-parallel GPU, and distributed architectures.
With these algorithms it is possible to map the FMM efficiently onto the GPU
or distributed heterogeneous CPU-GPU systems. Further, in dynamic problems, as
the distribution of the particles changes, the reduced cost of building the data
structures improves performance. Using these algorithms, we demonstrate example
high fidelity simulations with large problem sizes by using FMM on both single
and multiple heterogeneous computing facilities equipped with multi-core CPU
and many-core GPUs.
| [
{
"created": "Tue, 8 Jan 2013 21:57:20 GMT",
"version": "v1"
}
] | 2013-01-10 | [
[
"Hu",
"Qi",
""
],
[
"Gumerov",
"Nail A.",
""
],
[
"Duraiswami",
"Ramani",
""
]
] | We present efficient algorithms to build data structures and the lists needed for fast multipole methods. The algorithms are capable of being efficiently implemented on serial, data-parallel GPU, and distributed architectures. With these algorithms it is possible to map the FMM efficiently onto the GPU or distributed heterogeneous CPU-GPU systems. Further, in dynamic problems, as the distribution of the particles changes, the reduced cost of building the data structures improves performance. Using these algorithms, we demonstrate example high fidelity simulations with large problem sizes by using FMM on both single and multiple heterogeneous computing facilities equipped with multi-core CPU and many-core GPUs. |
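A common building block for FMM data structures of the kind described in the Hu et al. record above, on both GPUs and distributed machines, is sorting particles by Morton (Z-order) keys so that octree boxes become contiguous ranges. The sketch below shows key computation only; interaction-list construction is not included, and the grid resolution is arbitrary.

```python
import numpy as np

def morton_key_3d(ix, iy, iz, bits=10):
    """Interleave the bits of 3D integer cell coordinates into a Morton
    (Z-order) key; sorting particles by key groups them into octree boxes."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def particle_keys(points, bits=10):
    """Quantize unit-cube points to a 2^bits grid and return Morton keys."""
    cells = np.minimum((points * (1 << bits)).astype(int), (1 << bits) - 1)
    return np.array([morton_key_3d(x, y, z, bits) for x, y, z in cells])

pts = np.random.rand(8, 3)
order = np.argsort(particle_keys(pts))   # spatially coherent ordering
print(order)
```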
2307.05973 | Wenlong Huang | Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Li
Fei-Fei | VoxPoser: Composable 3D Value Maps for Robotic Manipulation with
Language Models | null | null | null | null | cs.RO cs.AI cs.CL cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) are shown to possess a wealth of actionable
knowledge that can be extracted for robot manipulation in the form of reasoning
and planning. Despite the progress, most still rely on pre-defined motion
primitives to carry out the physical interactions with the environment, which
remains a major bottleneck. In this work, we aim to synthesize robot
trajectories, i.e., a dense sequence of 6-DoF end-effector waypoints, for a
large variety of manipulation tasks given an open-set of instructions and an
open-set of objects. We achieve this by first observing that LLMs excel at
inferring affordances and constraints given a free-form language instruction.
More importantly, by leveraging their code-writing capabilities, they can
interact with a vision-language model (VLM) to compose 3D value maps to ground
the knowledge into the observation space of the agent. The composed value maps
are then used in a model-based planning framework to zero-shot synthesize
closed-loop robot trajectories with robustness to dynamic perturbations. We
further demonstrate how the proposed framework can benefit from online
experiences by efficiently learning a dynamics model for scenes that involve
contact-rich interactions. We present a large-scale study of the proposed
method in both simulated and real-robot environments, showcasing the ability to
perform a large variety of everyday manipulation tasks specified in free-form
natural language. Videos and code at https://voxposer.github.io
| [
{
"created": "Wed, 12 Jul 2023 07:40:48 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Nov 2023 06:53:37 GMT",
"version": "v2"
}
] | 2023-11-03 | [
[
"Huang",
"Wenlong",
""
],
[
"Wang",
"Chen",
""
],
[
"Zhang",
"Ruohan",
""
],
[
"Li",
"Yunzhu",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Fei-Fei",
"Li",
""
]
] | Large language models (LLMs) are shown to possess a wealth of actionable knowledge that can be extracted for robot manipulation in the form of reasoning and planning. Despite the progress, most still rely on pre-defined motion primitives to carry out the physical interactions with the environment, which remains a major bottleneck. In this work, we aim to synthesize robot trajectories, i.e., a dense sequence of 6-DoF end-effector waypoints, for a large variety of manipulation tasks given an open-set of instructions and an open-set of objects. We achieve this by first observing that LLMs excel at inferring affordances and constraints given a free-form language instruction. More importantly, by leveraging their code-writing capabilities, they can interact with a vision-language model (VLM) to compose 3D value maps to ground the knowledge into the observation space of the agent. The composed value maps are then used in a model-based planning framework to zero-shot synthesize closed-loop robot trajectories with robustness to dynamic perturbations. We further demonstrate how the proposed framework can benefit from online experiences by efficiently learning a dynamics model for scenes that involve contact-rich interactions. We present a large-scale study of the proposed method in both simulated and real-robot environments, showcasing the ability to perform a large variety of everyday manipulation tasks specified in free-form natural language. Videos and code at https://voxposer.github.io |
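The composed 3D value maps in the VoxPoser record above can be caricatured as an elementwise combination of an affordance map and a constraint map, followed by waypoint selection. This toy sketch omits the LLM code generation and VLM grounding that actually produce the maps; the grid, weights, and voxel choices are invented.

```python
import numpy as np

def compose_value_map(affordance, avoid, alpha=1.0):
    """Combine an affordance map (high near the target) with a constraint
    map (high near regions to avoid) into one 3D value map; the planner
    then seeks high-value voxels."""
    return affordance - alpha * avoid

grid = np.zeros((20, 20, 20))
affordance, avoid = grid.copy(), grid.copy()
affordance[15, 10, 5] = 1.0              # e.g., "grasp the handle"
avoid[10:12, 10:12, :] = 1.0             # e.g., "stay away from the vase"
value = compose_value_map(affordance, avoid)
waypoint = np.unravel_index(np.argmax(value), value.shape)
print("next waypoint voxel:", waypoint)
```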
2105.09050 | Jia-Chen Gu | Jia-Chen Gu, Hui Liu, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Xiaodan
Zhu | Partner Matters! An Empirical Study on Fusing Personas for Personalized
Response Selection in Retrieval-Based Chatbots | Accepted by SIGIR 2021 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Persona can function as the prior knowledge for maintaining the consistency
of dialogue systems. Most previous studies adopted the self persona in
dialogue whose response was about to be selected from a set of candidates or
directly generated, but few have noticed the role of partner in dialogue. This
paper makes an attempt to thoroughly explore the impact of utilizing personas
that describe either self or partner speakers on the task of response selection
in retrieval-based chatbots. Four persona fusion strategies are designed, which
assume personas interact with contexts or responses in different ways. These
strategies are implemented into three representative models for response
selection, which are based on the Hierarchical Recurrent Encoder (HRE),
Interactive Matching Network (IMN) and Bidirectional Encoder Representations
from Transformers (BERT) respectively. Empirical studies on the Persona-Chat
dataset show that the partner personas neglected in previous studies can
improve the accuracy of response selection in the IMN- and BERT-based models.
Besides, our BERT-based model implemented with the context-response-aware
persona fusion strategy outperforms previous methods by margins larger than
2.7% on original personas and 4.6% on revised personas in terms of hits@1
(top-1 accuracy), achieving a new state-of-the-art performance on the
Persona-Chat dataset.
| [
{
"created": "Wed, 19 May 2021 10:32:30 GMT",
"version": "v1"
},
{
"created": "Fri, 21 May 2021 02:43:50 GMT",
"version": "v2"
}
] | 2021-05-24 | [
[
"Gu",
"Jia-Chen",
""
],
[
"Liu",
"Hui",
""
],
[
"Ling",
"Zhen-Hua",
""
],
[
"Liu",
"Quan",
""
],
[
"Chen",
"Zhigang",
""
],
[
"Zhu",
"Xiaodan",
""
]
] | Persona can function as the prior knowledge for maintaining the consistency of dialogue systems. Most previous studies adopted the self persona in dialogue whose response was about to be selected from a set of candidates or directly generated, but few have noticed the role of partner in dialogue. This paper makes an attempt to thoroughly explore the impact of utilizing personas that describe either self or partner speakers on the task of response selection in retrieval-based chatbots. Four persona fusion strategies are designed, which assume personas interact with contexts or responses in different ways. These strategies are implemented into three representative models for response selection, which are based on the Hierarchical Recurrent Encoder (HRE), Interactive Matching Network (IMN) and Bidirectional Encoder Representations from Transformers (BERT) respectively. Empirical studies on the Persona-Chat dataset show that the partner personas neglected in previous studies can improve the accuracy of response selection in the IMN- and BERT-based models. Besides, our BERT-based model implemented with the context-response-aware persona fusion strategy outperforms previous methods by margins larger than 2.7% on original personas and 4.6% on revised personas in terms of hits@1 (top-1 accuracy), achieving a new state-of-the-art performance on the Persona-Chat dataset. |
2204.13243 | Sharon Levy | Kai Nakamura, Sharon Levy, Yi-Lin Tuan, Wenhu Chen, William Yang Wang | HybriDialogue: An Information-Seeking Dialogue Dataset Grounded on
Tabular and Textual Data | Findings of ACL 2022 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A pressing challenge in current dialogue systems is to successfully converse
with users on topics with information distributed across different modalities.
Previous work in multiturn dialogue systems has primarily focused on either
text or table information. In more realistic scenarios, having a joint
understanding of both is critical as knowledge is typically distributed over
both unstructured and structured forms. We present a new dialogue dataset,
HybriDialogue, which consists of crowdsourced natural conversations grounded on
both Wikipedia text and tables. The conversations are created through the
decomposition of complex multihop questions into simple, realistic multiturn
dialogue interactions. We propose retrieval, system state tracking, and
dialogue response generation tasks for our dataset and conduct baseline
experiments for each. Our results show that there is still ample opportunity
for improvement, demonstrating the importance of building stronger dialogue
systems that can reason over the complex setting of information-seeking
dialogue grounded on tables and text.
| [
{
"created": "Thu, 28 Apr 2022 00:52:16 GMT",
"version": "v1"
}
] | 2022-04-29 | [
[
"Nakamura",
"Kai",
""
],
[
"Levy",
"Sharon",
""
],
[
"Tuan",
"Yi-Lin",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Wang",
"William Yang",
""
]
] | A pressing challenge in current dialogue systems is to successfully converse with users on topics with information distributed across different modalities. Previous work in multiturn dialogue systems has primarily focused on either text or table information. In more realistic scenarios, having a joint understanding of both is critical as knowledge is typically distributed over both unstructured and structured forms. We present a new dialogue dataset, HybriDialogue, which consists of crowdsourced natural conversations grounded on both Wikipedia text and tables. The conversations are created through the decomposition of complex multihop questions into simple, realistic multiturn dialogue interactions. We propose retrieval, system state tracking, and dialogue response generation tasks for our dataset and conduct baseline experiments for each. Our results show that there is still ample opportunity for improvement, demonstrating the importance of building stronger dialogue systems that can reason over the complex setting of information-seeking dialogue grounded on tables and text. |
2203.00833 | Tsingsong Zhao | Qingsong Zhao, Yi Wang, Shuguang Dou, Chen Gong, Yin Wang, Cairong
Zhao | Adaptive Discriminative Regularization for Visual Classification | We submit it to a Journal this time | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | How to improve discriminative feature learning is central in classification.
Existing works address this problem by explicitly increasing inter-class
separability and intra-class similarity, whether by constructing positive and
negative pairs for contrastive learning or posing tighter class separating
margins. These methods do not exploit the similarity between different classes
as they adhere to the i.i.d. assumption in data. In this paper, we embrace the
real-world data distribution setting that some classes share semantic overlaps
due to their similar appearances or concepts. Based on this hypothesis, we
propose a novel regularization to improve discriminative learning. We first
calibrate the estimated highest likelihood of one sample based on its
semantically neighboring classes, then encourage the overall likelihood
predictions to be deterministic by imposing an adaptive exponential penalty. As
the gradient of the proposed method is roughly proportional to the uncertainty
of the predicted likelihoods, we name it adaptive discriminative regularization
(ADR), trained along with a standard cross entropy loss in classification.
Extensive experiments demonstrate that it can yield consistent and non-trivial
performance improvements in a variety of visual classification tasks (over 10
benchmarks). Furthermore, we find it is robust to long-tailed and noisy label
data distribution. Its flexible design enables its compatibility with
mainstream classification architectures and losses.
| [
{
"created": "Wed, 2 Mar 2022 02:52:23 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Jan 2023 09:53:02 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Jan 2023 14:06:36 GMT",
"version": "v3"
}
] | 2023-01-12 | [
[
"Zhao",
"Qingsong",
""
],
[
"Wang",
"Yi",
""
],
[
"Dou",
"Shuguang",
""
],
[
"Gong",
"Chen",
""
],
[
"Wang",
"Yin",
""
],
[
"Zhao",
"Cairong",
""
]
] | How to improve discriminative feature learning is central in classification. Existing works address this problem by explicitly increasing inter-class separability and intra-class similarity, whether by constructing positive and negative pairs for contrastive learning or posing tighter class separating margins. These methods do not exploit the similarity between different classes as they adhere to the i.i.d. assumption in data. In this paper, we embrace the real-world data distribution setting that some classes share semantic overlaps due to their similar appearances or concepts. Based on this hypothesis, we propose a novel regularization to improve discriminative learning. We first calibrate the estimated highest likelihood of one sample based on its semantically neighboring classes, then encourage the overall likelihood predictions to be deterministic by imposing an adaptive exponential penalty. As the gradient of the proposed method is roughly proportional to the uncertainty of the predicted likelihoods, we name it adaptive discriminative regularization (ADR), trained along with a standard cross entropy loss in classification. Extensive experiments demonstrate that it can yield consistent and non-trivial performance improvements in a variety of visual classification tasks (over 10 benchmarks). Furthermore, we find it is robust to long-tailed and noisy label data distribution. Its flexible design enables its compatibility with mainstream classification architectures and losses. |
2407.09966 | Mei Qiu | Mei Qiu, Lauren Ann Christopher, Lingxi Li, Stanley Chien, Yaobin Chen | Optimizing ROI Benefits Vehicle ReID in ITS | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | Vehicle re-identification (ReID) is a computer vision task that matches the
same vehicle across different cameras or viewpoints in a surveillance system.
This is crucial for Intelligent Transportation Systems (ITS), where the
effectiveness is influenced by the regions from which vehicle images are
cropped. This study explores whether optimal vehicle detection regions, guided
by detection confidence scores, can enhance feature matching and ReID tasks.
Using our framework with multiple Regions of Interest (ROIs) and lane-wise
vehicle counts, we employed YOLOv8 for detection and DeepSORT for tracking
across twelve Indiana Highway videos, including two pairs of videos from
non-overlapping cameras. Tracked vehicle images were cropped from inside and
outside the ROIs at five-frame intervals. Features were extracted using
pre-trained models: ResNet50, ResNeXt50, Vision Transformer, and
Swin-Transformer. Feature consistency was assessed through cosine similarity,
information entropy, and clustering variance. Results showed that features from
images cropped inside ROIs had higher mean cosine similarity values compared to
those involving one image inside and one outside the ROIs. The most significant
difference was observed during night conditions (0.7842 inside vs. 0.5 outside
the ROI with Swin-Transformer) and in cross-camera scenarios (0.75
inside-inside vs. 0.52 inside-outside the ROI with Vision Transformer).
Information entropy and clustering variance further supported that features in
ROIs are more consistent. These findings suggest that strategically selected
ROIs can enhance tracking performance and ReID accuracy in ITS.
| [
{
"created": "Sat, 13 Jul 2024 18:15:06 GMT",
"version": "v1"
}
] | 2024-07-16 | [
[
"Qiu",
"Mei",
""
],
[
"Christopher",
"Lauren Ann",
""
],
[
"Li",
"Lingxi",
""
],
[
"Chien",
"Stanley",
""
],
[
"Chen",
"Yaobin",
""
]
] | Vehicle re-identification (ReID) is a computer vision task that matches the same vehicle across different cameras or viewpoints in a surveillance system. This is crucial for Intelligent Transportation Systems (ITS), where the effectiveness is influenced by the regions from which vehicle images are cropped. This study explores whether optimal vehicle detection regions, guided by detection confidence scores, can enhance feature matching and ReID tasks. Using our framework with multiple Regions of Interest (ROIs) and lane-wise vehicle counts, we employed YOLOv8 for detection and DeepSORT for tracking across twelve Indiana Highway videos, including two pairs of videos from non-overlapping cameras. Tracked vehicle images were cropped from inside and outside the ROIs at five-frame intervals. Features were extracted using pre-trained models: ResNet50, ResNeXt50, Vision Transformer, and Swin-Transformer. Feature consistency was assessed through cosine similarity, information entropy, and clustering variance. Results showed that features from images cropped inside ROIs had higher mean cosine similarity values compared to those involving one image inside and one outside the ROIs. The most significant difference was observed during night conditions (0.7842 inside vs. 0.5 outside the ROI with Swin-Transformer) and in cross-camera scenarios (0.75 inside-inside vs. 0.52 inside-outside the ROI with Vision Transformer). Information entropy and clustering variance further supported that features in ROIs are more consistent. These findings suggest that strategically selected ROIs can enhance tracking performance and ReID accuracy in ITS. |
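The feature-consistency measurement in the Qiu et al. record above reduces to mean pairwise cosine similarity between sets of crops of the same tracked vehicle; a minimal version is below, with synthetic features standing in for the ResNet/Transformer embeddings used in the study.

```python
import numpy as np

def mean_pairwise_cosine(feats_a, feats_b):
    """Mean cosine similarity between two sets of feature vectors of the
    same tracked vehicle (e.g., inside-ROI crops vs outside-ROI crops)."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return float((a @ b.T).mean())

rng = np.random.default_rng(0)
inside = rng.normal(size=(5, 2048)) + 3.0     # toy: correlated crops
outside = rng.normal(size=(5, 2048)) + 3.0
print(mean_pairwise_cosine(inside, inside))   # higher => more consistent
print(mean_pairwise_cosine(inside, outside))
```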
cs/0607038 | Benoit Donnet | Benoit Donnet, Bruno Baynat, Timur Friedman | Retouched Bloom Filters: Allowing Networked Applications to Flexibly
Trade Off False Positives Against False Negatives | This is a new version of the technical reports with improved
algorithms and theorical analysis of algorithms | null | null | null | cs.NI | null | Where distributed agents must share voluminous set membership information,
Bloom filters provide a compact, though lossy, way for them to do so. Numerous
recent networking papers have examined the trade-offs between the bandwidth
consumed by the transmission of Bloom filters, and the error rate, which takes
the form of false positives, and which rises the more the filters are
compressed. In this paper, we introduce the retouched Bloom filter (RBF), an
extension that makes the Bloom filter more flexible by permitting the removal
of selected false positives at the expense of generating random false
negatives. We analytically show that RBFs created through a random process
maintain an overall error rate, expressed as a combination of the false
positive rate and the false negative rate, that is equal to the false positive
rate of the corresponding Bloom filters. We further provide some simple
heuristics and improved algorithms that decrease the false positive rate more
than the corresponding increase in the false negative rate, when creating
RBFs. Finally, we demonstrate the advantages of an RBF over a Bloom filter in a
distributed network topology measurement application, where information about
large stop sets must be shared among route tracing monitors.
| [
{
"created": "Sun, 9 Jul 2006 10:40:26 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Dec 2006 13:52:04 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Donnet",
"Benoit",
""
],
[
"Baynat",
"Bruno",
""
],
[
"Friedman",
"Timur",
""
]
] | Where distributed agents must share voluminous set membership information, Bloom filters provide a compact, though lossy, way for them to do so. Numerous recent networking papers have examined the trade-offs between the bandwidth consumed by the transmission of Bloom filters, and the error rate, which takes the form of false positives, and which rises the more the filters are compressed. In this paper, we introduce the retouched Bloom filter (RBF), an extension that makes the Bloom filter more flexible by permitting the removal of selected false positives at the expense of generating random false negatives. We analytically show that RBFs created through a random process maintain an overall error rate, expressed as a combination of the false positive rate and the false negative rate, that is equal to the false positive rate of the corresponding Bloom filters. We further provide some simple heuristics and improved algorithms that decrease the false positive rate more than the corresponding increase in the false negative rate, when creating RBFs. Finally, we demonstrate the advantages of an RBF over a Bloom filter in a distributed network topology measurement application, where information about large stop sets must be shared among route tracing monitors.
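The core RBF mechanism described above — clearing selected bits to remove chosen false positives, at the price of possible false negatives — can be illustrated with a minimal sketch. This is not the paper's implementation; the hash scheme, the parameters, and the naive clear-all-bits retouching heuristic are illustrative assumptions.

```python
import hashlib

class RetouchedBloomFilter:
    """Minimal Bloom filter whose bits can be cleared ("retouched") to
    remove chosen false positives, at the cost of possible false
    negatives for legitimately inserted elements."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item):
        # Derive k bit positions from independent-ish hashes of the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        return all(self.bits[p] for p in self._positions(item))

    def retouch(self, false_positive):
        # Clear every bit the unwanted element maps to; this removes the
        # false positive but may turn inserted elements that share any of
        # these bits into false negatives.
        for p in self._positions(false_positive):
            self.bits[p] = 0

rbf = RetouchedBloomFilter()
for x in ["10.0.0.1", "10.0.0.2"]:
    rbf.add(x)
if rbf.query("192.168.1.7"):          # a false positive we want removed
    rbf.retouch("192.168.1.7")
assert not rbf.query("192.168.1.7")   # gone, possibly at some FN cost
```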
2003.09120 | Piotr Kulicki | Xin Sun, Piotr Kulicki, Mirek Sopek | Multi-party Quantum Byzantine Agreement Without Entanglement | 6 pages, 1 figure | null | 10.3390/e22101152 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a protocol of quantum communication to achieve
Byzantine agreement among multiple parties. The striking feature of our
proposal in comparison to the existing protocols is that we do not use
entanglement to achieve the agreement. There are two stages in our protocol. In
the first stage, a list of numbers that satisfies some special properties is
distributed to every participant by a group of semi-honest list distributors
via quantum secure communication. Then, in the second stage those participants
exchange some information to reach agreement.
| [
{
"created": "Fri, 20 Mar 2020 06:26:44 GMT",
"version": "v1"
}
] | 2020-12-02 | [
[
"Sun",
"Xin",
""
],
[
"Kulicki",
"Piotr",
""
],
[
"Sopek",
"Mirek",
""
]
] | In this paper we propose a protocol of quantum communication to achieve Byzantine agreement among multiple parties. The striking feature of our proposal in comparison to the existing protocols is that we do not use entanglement to achieve the agreement. There are two stages in our protocol. In the first stage, a list of numbers that satisfies some special properties is distributed to every participant by a group of semi-honest list distributors via quantum secure communication. Then, in the second stage those participants exchange some information to reach agreement. |
2108.05595 | Thorben Werner | Thorben Werner | Reinforcement Learning Approach to Active Learning for Image
Classification | Master Thesis | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine Learning requires large amounts of labeled data to fit a model. Many
datasets are already publicly available, which nevertheless restricts the
application possibilities of machine learning to the domains of those public
datasets. The
ever-growing penetration of machine learning algorithms in new application
areas requires solutions for the need for data in those new domains. This
thesis works on active learning as one possible solution to reduce the amount
of data that needs to be processed by hand, by processing only those datapoints
that specifically benefit the training of a strong model for the task. A newly
proposed framework for framing the active learning workflow as a reinforcement
learning problem is adapted for image classification and a series of three
experiments is conducted. Each experiment is evaluated and potential issues
with the approach are outlined. Each following experiment then proposes
improvements to the framework and evaluates their impact. After the last
experiment, a final conclusion is drawn, unfortunately rejecting this work's
hypothesis and outlining that the proposed framework at the moment is not
capable of improving active learning for image classification with a trained
reinforcement learning agent.
| [
{
"created": "Thu, 12 Aug 2021 08:34:02 GMT",
"version": "v1"
}
] | 2021-08-13 | [
[
"Werner",
"Thorben",
""
]
] | Machine Learning requires large amounts of labeled data to fit a model. Many datasets are already publicly available, which nevertheless restricts the application possibilities of machine learning to the domains of those public datasets. The ever-growing penetration of machine learning algorithms in new application areas requires solutions for the need for data in those new domains. This thesis works on active learning as one possible solution to reduce the amount of data that needs to be processed by hand, by processing only those datapoints that specifically benefit the training of a strong model for the task. A newly proposed framework for framing the active learning workflow as a reinforcement learning problem is adapted for image classification and a series of three experiments is conducted. Each experiment is evaluated and potential issues with the approach are outlined. Each following experiment then proposes improvements to the framework and evaluates their impact. After the last experiment, a final conclusion is drawn, unfortunately rejecting this work's hypothesis and outlining that the proposed framework at the moment is not capable of improving active learning for image classification with a trained reinforcement learning agent.
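The thesis frames active learning as a reinforcement learning problem: the agent's action is the choice of the next sample to label, and the reward reflects the resulting model improvement. The sketch below shows only that general shape, with a fixed uncertainty heuristic standing in for a trained agent; the data, model, and reward definition are illustrative stand-ins, not the thesis's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5)); y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_pool, y_pool = X[:400], y[:400]        # unlabeled pool (labels hidden)
X_val, y_val = X[400:], y[400:]          # held-out set defining the reward
# Seed set containing both classes, so the first fit is well-posed.
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])

model = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
prev_acc = model.score(X_val, y_val)

for step in range(20):
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    # "Policy": a fixed uncertainty heuristic stands in for the trained RL
    # agent; the state it scores is the model's predictive distribution.
    probs = model.predict_proba(X_pool[unlabeled])
    margins = np.abs(probs[:, 1] - 0.5)
    action = unlabeled[int(np.argmin(margins))]   # most uncertain sample
    labeled.append(action)                        # query its label
    model = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    acc = model.score(X_val, y_val)
    reward = acc - prev_acc                       # RL reward signal
    prev_acc = acc
    # A real agent would now update its policy from (state, action, reward).
```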
1903.09463 | Thorsten Wissmann | Robert Furber and Radu Mardare and Matteo Mio | Probabilistic logics based on Riesz spaces | null | Logical Methods in Computer Science, Volume 16, Issue 1 (January
27, 2020) lmcs:5306 | 10.23638/LMCS-16(1:6)2020 | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | We introduce a novel real-valued endogenous logic for expressing properties
of probabilistic transition systems called Riesz modal logic. The design of the
syntax and semantics of this logic is directly inspired by the theory of Riesz
spaces, a mature field of mathematics at the intersection of universal algebra
and functional analysis. By using powerful results from this theory, we develop
the duality theory of the Riesz modal logic in the form of an
algebra-to-coalgebra correspondence. This has a number of consequences
including: a sound and complete axiomatization, the proof that the logic
characterizes probabilistic bisimulation and other convenient results such as
completion theorems. This work is intended to be the basis for subsequent
research on extensions of Riesz modal logic with fixed-point operators.
| [
{
"created": "Fri, 22 Mar 2019 12:02:21 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Dec 2019 12:13:19 GMT",
"version": "v2"
},
{
"created": "Fri, 24 Jan 2020 12:31:27 GMT",
"version": "v3"
}
] | 2023-06-22 | [
[
"Furber",
"Robert",
""
],
[
"Mardare",
"Radu",
""
],
[
"Mio",
"Matteo",
""
]
] | We introduce a novel real-valued endogenous logic for expressing properties of probabilistic transition systems called Riesz modal logic. The design of the syntax and semantics of this logic is directly inspired by the theory of Riesz spaces, a mature field of mathematics at the intersection of universal algebra and functional analysis. By using powerful results from this theory, we develop the duality theory of the Riesz modal logic in the form of an algebra-to-coalgebra correspondence. This has a number of consequences including: a sound and complete axiomatization, the proof that the logic characterizes probabilistic bisimulation and other convenient results such as completion theorems. This work is intended to be the basis for subsequent research on extensions of Riesz modal logic with fixed-point operators. |
1704.08388 | Ted Pedersen | Ted Pedersen | Duluth at Semeval-2017 Task 7 : Puns upon a midnight dreary, Lexical
Semantics for the weak and weary | 5 pages, to Appear in the Proceedings of the 11th International
Workshop on Semantic Evaluation (SemEval 2017), August 2017, Vancouver, BC | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the Duluth systems that participated in SemEval-2017
Task 7 : Detection and Interpretation of English Puns. The Duluth systems
participated in all three subtasks, and relied on methods that included word
sense disambiguation and measures of semantic relatedness.
| [
{
"created": "Thu, 27 Apr 2017 00:29:17 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Apr 2017 01:16:07 GMT",
"version": "v2"
}
] | 2017-05-01 | [
[
"Pedersen",
"Ted",
""
]
] | This paper describes the Duluth systems that participated in SemEval-2017 Task 7 : Detection and Interpretation of English Puns. The Duluth systems participated in all three subtasks, and relied on methods that included word sense disambiguation and measures of semantic relatedness. |
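A pun is a word whose context supports two senses at once, so one simple word-sense-disambiguation signal is the overlap between sense glosses and the surrounding words. The sketch below is a toy Lesk-style scorer over a hand-made two-sense inventory; the Duluth systems' actual sense inventories and relatedness measures are not reproduced here.

```python
# Toy sense inventory; real systems would use WordNet glosses.
SENSES = {
    "interest": {
        "interest.n.01": "a sense of concern with and curiosity about something",
        "interest.n.04": "a fixed charge for borrowing money",
    },
}

def gloss_overlap(gloss, context_words):
    # Real Lesk variants would also strip stopwords and stem.
    return len(set(gloss.split()) & context_words)

def pun_score(word, context):
    """Score how 'punny' a word is: high when its two best senses BOTH
    overlap the surrounding context (the hallmark of a double meaning)."""
    ctx = set(context.lower().split()) - {word}
    overlaps = sorted(
        (gloss_overlap(g, ctx) for g in SENSES.get(word, {}).values()),
        reverse=True,
    )
    return overlaps[1] if len(overlaps) > 1 else 0   # second-best overlap

sent = "the bank paid me a fixed charge of curiosity about my money interest"
print(pun_score("interest", sent))   # > 0: both senses are supported
```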
2310.04703 | Chung-Soo Ahn | Chung-Soo Ahn, Jagath C. Rajapakse and Rajib Rana | Integrating Contrastive Learning into a Multitask Transformer Model for
Effective Domain Adaptation | null | null | null | null | cs.CL cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While speech emotion recognition (SER) research has made significant
progress, achieving generalization across various corpora continues to pose a
problem. We propose a novel domain adaptation technique that embodies a
multitask framework with SER as the primary task, and contrastive learning and
information maximisation loss as auxiliary tasks, underpinned by fine-tuning of
transformers pre-trained on large language models. Empirical results obtained
through experiments on well-established datasets like IEMOCAP and MSP-IMPROV
illustrate that our proposed model achieves state-of-the-art performance in SER
within cross-corpus scenarios.
| [
{
"created": "Sat, 7 Oct 2023 06:41:29 GMT",
"version": "v1"
}
] | 2023-10-10 | [
[
"Ahn",
"Chung-Soo",
""
],
[
"Rajapakse",
"Jagath C.",
""
],
[
"Rana",
"Rajib",
""
]
] | While speech emotion recognition (SER) research has made significant progress, achieving generalization across various corpora continues to pose a problem. We propose a novel domain adaptation technique that embodies a multitask framework with SER as the primary task, and contrastive learning and information maximisation loss as auxiliary tasks, underpinned by fine-tuning of transformers pre-trained on large language models. Empirical results obtained through experiments on well-established datasets like IEMOCAP and MSP-IMPROV illustrate that our proposed model achieves state-of-the-art performance in SER within cross-corpus scenarios.
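One plausible reading of the multitask objective is a primary cross-entropy term plus a supervised contrastive auxiliary term over the encoder embeddings. The sketch below combines the two in PyTorch; the loss weighting, temperature, and the specific contrastive formulation are assumptions, and the information-maximisation term is omitted.

```python
import torch
import torch.nn.functional as F

def multitask_loss(emb, logits, labels, temperature=0.1, alpha=0.5):
    """Hypothetical combination of a primary SER cross-entropy term with a
    supervised contrastive auxiliary term; the weight alpha and the
    temperature are illustrative, not the paper's values."""
    ce = F.cross_entropy(logits, labels)

    z = F.normalize(emb, dim=1)                  # (N, D) unit embeddings
    sim = z @ z.t() / temperature                # pairwise similarities
    n = z.size(0)
    mask_self = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(mask_self, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = same & ~mask_self                      # positives: same emotion
    pos_counts = pos.sum(1).clamp(min=1)
    contrastive = -(log_prob * pos).sum(1) / pos_counts
    contrastive = contrastive[pos.sum(1) > 0].mean()

    return ce + alpha * contrastive

emb = torch.randn(8, 32, requires_grad=True)
logits = torch.randn(8, 4, requires_grad=True)
labels = torch.randint(0, 4, (8,))
multitask_loss(emb, logits, labels).backward()
```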
2011.13594 | Andreas Buchberger | Andreas Buchberger, Christian H\"ager, Henry D. Pfister, Laurent
Schmalen, Alexandre Graell i Amat | Pruning and Quantizing Neural Belief Propagation Decoders | Accepted for publication in IEEE Journal on Selected Areas in
Communications (J-SAC) | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider near maximum-likelihood (ML) decoding of short linear block
codes. In particular, we propose a novel decoding approach based on neural
belief propagation (NBP) decoding recently introduced by Nachmani et al. in
which we allow a different parity-check matrix in each iteration of the
algorithm. The key idea is to consider NBP decoding over an overcomplete
parity-check matrix and use the weights of NBP as a measure of the importance
of the check nodes (CNs) to decoding. The unimportant CNs are then pruned. In
contrast to NBP, which performs decoding on a given fixed parity-check matrix,
the proposed pruning-based neural belief propagation (PB-NBP) typically results
in a different parity-check matrix in each iteration. For a given complexity in
terms of CN evaluations, we show that PB-NBP yields significant performance
improvements with respect to NBP. We apply the proposed decoder to the decoding
of a Reed-Muller code, a short low-density parity-check (LDPC) code, and a
polar code. PB-NBP outperforms NBP decoding over an overcomplete parity-check
matrix by 0.27-0.31 dB while reducing the number of required CN evaluations by
up to 97%. For the LDPC code, PB-NBP outperforms conventional belief
propagation with the same number of CN evaluations by 0.52 dB. We further
extend the pruning concept to offset min-sum decoding and introduce a
pruning-based neural offset min-sum (PB-NOMS) decoder, for which we jointly
optimize the offsets and the quantization of the messages and offsets. We
demonstrate performance 0.5 dB from ML decoding with 5-bit quantization for the
Reed-Muller code.
| [
{
"created": "Fri, 27 Nov 2020 07:50:31 GMT",
"version": "v1"
}
] | 2020-11-30 | [
[
"Buchberger",
"Andreas",
""
],
[
"Häger",
"Christian",
""
],
[
"Pfister",
"Henry D.",
""
],
[
"Schmalen",
"Laurent",
""
],
[
"Amat",
"Alexandre Graell i",
""
]
] | We consider near maximum-likelihood (ML) decoding of short linear block codes. In particular, we propose a novel decoding approach based on neural belief propagation (NBP) decoding recently introduced by Nachmani et al. in which we allow a different parity-check matrix in each iteration of the algorithm. The key idea is to consider NBP decoding over an overcomplete parity-check matrix and use the weights of NBP as a measure of the importance of the check nodes (CNs) to decoding. The unimportant CNs are then pruned. In contrast to NBP, which performs decoding on a given fixed parity-check matrix, the proposed pruning-based neural belief propagation (PB-NBP) typically results in a different parity-check matrix in each iteration. For a given complexity in terms of CN evaluations, we show that PB-NBP yields significant performance improvements with respect to NBP. We apply the proposed decoder to the decoding of a Reed-Muller code, a short low-density parity-check (LDPC) code, and a polar code. PB-NBP outperforms NBP decoding over an overcomplete parity-check matrix by 0.27-0.31 dB while reducing the number of required CN evaluations by up to 97%. For the LDPC code, PB-NBP outperforms conventional belief propagation with the same number of CN evaluations by 0.52 dB. We further extend the pruning concept to offset min-sum decoding and introduce a pruning-based neural offset min-sum (PB-NOMS) decoder, for which we jointly optimize the offsets and the quantization of the messages and offsets. We demonstrate performance 0.5 dB from ML decoding with 5-bit quantization for the Reed-Muller code. |
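The pruning step itself reduces to ranking the check nodes of the overcomplete parity-check matrix by the magnitude of their learned NBP weights and keeping a fixed budget of rows, yielding a different matrix per iteration. A toy version, with random stand-ins for the matrix and the learned weights:

```python
import numpy as np

def prune_check_nodes(H_over, cn_weights, budget):
    """Keep only the `budget` most important rows (check nodes) of an
    overcomplete parity-check matrix, ranked by the magnitude of the
    weights a neural BP decoder learned for them. Illustrative only."""
    order = np.argsort(-np.abs(cn_weights))      # most important first
    keep = np.sort(order[:budget])               # preserve row order
    return H_over[keep]

H_over = np.random.randint(0, 2, size=(12, 7))  # stand-in overcomplete matrix
w = np.random.randn(12)                         # stand-in learned CN weights
H_pruned = prune_check_nodes(H_over, w, budget=6)
print(H_pruned.shape)                           # (6, 7): one such matrix per iteration
```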
1702.03257 | Sandip Chakraborty | Raja Karmakar and Sandip Chakraborty and Samiran Chattopadhyay | Impact of IEEE 802.11n/ac PHY/MAC High Throughput Enhancements over
Transport/Application Layer Protocols - A Survey | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the inception of Wireless Local Area Networks (WLANs) in the year 1997,
it has tremendously grown in the last few years. IEEE 802.11 is popularly known
as WLAN. To provide the last mile wireless broadband connectivity to users,
IEEE 802.11 is enriched with IEEE 802.11a, IEEE 802.11b and IEEE 802.11g. More
recently, IEEE 802.11n, IEEE 802.11ac and IEEE 802.11ad are introduced with
enhancements to the physical (PHY) layer and medium access control (MAC)
sublayer to provide much higher data rates and thus these amendments are called
High Throughput WLANs (HT-WLANs). For both standards, PHY is enhanced with
multiple-input multiple-output (MIMO) antenna technologies, channel bonding,
short guard intervals (SGI), enhanced modulation and coding schemes (MCS). At
the same time, MAC layer overhead is reduced by introducing frame aggregation
and block acknowledgement technologies. However, existing studies reveal that
although PHY and MAC enhancements promise to improve the physical data rate
significantly, they have a negative impact on upper layer protocols -- mainly
on reliable end-to-end transport/application layer protocols. As a
consequence, a large number of research groups have focused their research on
HT-WLANs to improve the coordination between the PHY/MAC and upper layer
protocols and thus boost the performance benefit. In this survey, we discuss
the impact of the PHY/MAC layer enhancements in HT-WLANs on
transport/application layer protocols, and list different open challenges that
can be explored for the development of next-generation HT-WLAN technologies.
| [
{
"created": "Fri, 10 Feb 2017 17:25:48 GMT",
"version": "v1"
}
] | 2017-02-13 | [
[
"Karmakar",
"Raja",
""
],
[
"Chakraborty",
"Sandip",
""
],
[
"Chattopadhyay",
"Samiran",
""
]
] | Since the inception of Wireless Local Area Networks (WLANs) in 1997, the technology has grown tremendously. IEEE 802.11 is popularly known as WLAN. To provide last-mile wireless broadband connectivity to users, IEEE 802.11 is enriched with IEEE 802.11a, IEEE 802.11b and IEEE 802.11g. More recently, IEEE 802.11n, IEEE 802.11ac and IEEE 802.11ad are introduced with enhancements to the physical (PHY) layer and medium access control (MAC) sublayer to provide much higher data rates and thus these amendments are called High Throughput WLANs (HT-WLANs). For both standards, PHY is enhanced with multiple-input multiple-output (MIMO) antenna technologies, channel bonding, short guard intervals (SGI), enhanced modulation and coding schemes (MCS). At the same time, MAC layer overhead is reduced by introducing frame aggregation and block acknowledgement technologies. However, existing studies reveal that although PHY and MAC enhancements promise to improve the physical data rate significantly, they have a negative impact on upper layer protocols -- mainly on reliable end-to-end transport/application layer protocols. As a consequence, a large number of research groups have focused their research on HT-WLANs to improve the coordination between the PHY/MAC and upper layer protocols and thus boost the performance benefit. In this survey, we discuss the impact of the PHY/MAC layer enhancements in HT-WLANs on transport/application layer protocols, and list different open challenges that can be explored for the development of next-generation HT-WLAN technologies.
2103.03170 | Ruixu Liu | Ruixu Liu, Ju Shen, He Wang, Chen Chen, Sen-ching Cheung, Vijayan K.
Asari | Enhanced 3D Human Pose Estimation from Videos by using Attention-Based
Neural Network with Dilated Convolutions | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The attention mechanism provides a sequential prediction framework for
learning spatial models with enhanced implicit temporal consistency. In this
work, we show a systematic design (from 2D to 3D) for how conventional networks
and other forms of constraints can be incorporated into the attention framework
for learning long-range dependencies for the task of pose estimation. The
contribution of this paper is to provide a systematic approach for designing
and training of attention-based models for the end-to-end pose estimation, with
the flexibility and scalability of arbitrary video sequences as input. We
achieve this by adapting temporal receptive field via a multi-scale structure
of dilated convolutions. Besides, the proposed architecture can be easily
adapted to a causal model enabling real-time performance. Any off-the-shelf 2D
pose estimation systems, e.g. Mocap libraries, can be easily integrated in an
ad-hoc fashion. Our method achieves the state-of-the-art performance and
outperforms existing methods by reducing the mean per joint position error to
33.4 mm on Human3.6M dataset.
| [
{
"created": "Thu, 4 Mar 2021 17:26:51 GMT",
"version": "v1"
}
] | 2021-03-05 | [
[
"Liu",
"Ruixu",
""
],
[
"Shen",
"Ju",
""
],
[
"Wang",
"He",
""
],
[
"Chen",
"Chen",
""
],
[
"Cheung",
"Sen-ching",
""
],
[
"Asari",
"Vijayan K.",
""
]
] | The attention mechanism provides a sequential prediction framework for learning spatial models with enhanced implicit temporal consistency. In this work, we show a systematic design (from 2D to 3D) for how conventional networks and other forms of constraints can be incorporated into the attention framework for learning long-range dependencies for the task of pose estimation. The contribution of this paper is to provide a systematic approach for designing and training of attention-based models for the end-to-end pose estimation, with the flexibility and scalability of arbitrary video sequences as input. We achieve this by adapting temporal receptive field via a multi-scale structure of dilated convolutions. Besides, the proposed architecture can be easily adapted to a causal model enabling real-time performance. Any off-the-shelf 2D pose estimation systems, e.g. Mocap libraries, can be easily integrated in an ad-hoc fashion. Our method achieves the state-of-the-art performance and outperforms existing methods by reducing the mean per joint position error to 33.4 mm on Human3.6M dataset. |
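The multi-scale temporal receptive field described above comes from stacking 1D convolutions with growing dilation over the frame axis. A generic sketch of that mechanism follows; the channel count, kernel size, and dilation schedule are illustrative, not the paper's exact architecture, and a causal variant would simply pad on the left only.

```python
import torch
import torch.nn as nn

class DilatedTemporalBlock(nn.Module):
    """Stack of 1D convolutions with exponentially growing dilation, so
    the temporal receptive field grows multi-scale while depth stays
    small. A sketch of the general mechanism only."""

    def __init__(self, channels=64, kernel=3, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:
            pad = (kernel - 1) * d // 2          # keep sequence length fixed
            layers += [nn.Conv1d(channels, channels, kernel, padding=pad,
                                 dilation=d), nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):                        # x: (batch, channels, frames)
        return self.net(x)

x = torch.randn(2, 64, 81)                       # an 81-frame 2D-pose feature clip
print(DilatedTemporalBlock()(x).shape)           # torch.Size([2, 64, 81])
# Receptive field of this stack: 1 + sum((k-1)*d) = 1 + 2*(1+2+4) = 15 frames.
```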
2401.05334 | Zhaoxi Chen | Zhaoxi Chen, Gyeongsik Moon, Kaiwen Guo, Chen Cao, Stanislav
Pidhorskyi, Tomas Simon, Rohan Joshi, Yuan Dong, Yichen Xu, Bernardo Pires,
He Wen, Lucas Evans, Bo Peng, Julia Buffalini, Autumn Trimble, Kevyn McPhail,
Melissa Schoeller, Shoou-I Yu, Javier Romero, Michael Zollh\"ofer, Yaser
Sheikh, Ziwei Liu, Shunsuke Saito | URHand: Universal Relightable Hands | Project Page https://frozenburning.github.io/projects/urhand/ | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing photorealistic relightable hand models require extensive
identity-specific observations in different views, poses, and illuminations,
and face challenges in generalizing to natural illuminations and novel
identities. To bridge this gap, we present URHand, the first universal
relightable hand model that generalizes across viewpoints, poses,
illuminations, and identities. Our model allows few-shot personalization using
images captured with a mobile phone, and is ready to be photorealistically
rendered under novel illuminations. To simplify the personalization process
while retaining photorealism, we build a powerful universal relightable prior
based on neural relighting from multi-view images of hands captured in a light
stage with hundreds of identities. The key challenge is scaling the
cross-identity training while maintaining personalized fidelity and sharp
details without compromising generalization under natural illuminations. To
this end, we propose a spatially varying linear lighting model as the neural
renderer that takes physics-inspired shading as input feature. By removing
non-linear activations and bias, our specifically designed lighting model
explicitly keeps the linearity of light transport. This enables single-stage
training from light-stage data while generalizing to real-time rendering under
arbitrary continuous illuminations across diverse identities. In addition, we
introduce the joint learning of a physically based model and our neural
relighting model, which further improves fidelity and generalization. Extensive
experiments show that our approach achieves superior performance over existing
methods in terms of both quality and generalizability. We also demonstrate
quick personalization of URHand from a short phone scan of an unseen identity.
| [
{
"created": "Wed, 10 Jan 2024 18:59:51 GMT",
"version": "v1"
}
] | 2024-01-11 | [
[
"Chen",
"Zhaoxi",
""
],
[
"Moon",
"Gyeongsik",
""
],
[
"Guo",
"Kaiwen",
""
],
[
"Cao",
"Chen",
""
],
[
"Pidhorskyi",
"Stanislav",
""
],
[
"Simon",
"Tomas",
""
],
[
"Joshi",
"Rohan",
""
],
[
"Dong",
"Yuan",
""
],
[
"Xu",
"Yichen",
""
],
[
"Pires",
"Bernardo",
""
],
[
"Wen",
"He",
""
],
[
"Evans",
"Lucas",
""
],
[
"Peng",
"Bo",
""
],
[
"Buffalini",
"Julia",
""
],
[
"Trimble",
"Autumn",
""
],
[
"McPhail",
"Kevyn",
""
],
[
"Schoeller",
"Melissa",
""
],
[
"Yu",
"Shoou-I",
""
],
[
"Romero",
"Javier",
""
],
[
"Zollhöfer",
"Michael",
""
],
[
"Sheikh",
"Yaser",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Saito",
"Shunsuke",
""
]
] | Existing photorealistic relightable hand models require extensive identity-specific observations in different views, poses, and illuminations, and face challenges in generalizing to natural illuminations and novel identities. To bridge this gap, we present URHand, the first universal relightable hand model that generalizes across viewpoints, poses, illuminations, and identities. Our model allows few-shot personalization using images captured with a mobile phone, and is ready to be photorealistically rendered under novel illuminations. To simplify the personalization process while retaining photorealism, we build a powerful universal relightable prior based on neural relighting from multi-view images of hands captured in a light stage with hundreds of identities. The key challenge is scaling the cross-identity training while maintaining personalized fidelity and sharp details without compromising generalization under natural illuminations. To this end, we propose a spatially varying linear lighting model as the neural renderer that takes physics-inspired shading as input feature. By removing non-linear activations and bias, our specifically designed lighting model explicitly keeps the linearity of light transport. This enables single-stage training from light-stage data while generalizing to real-time rendering under arbitrary continuous illuminations across diverse identities. In addition, we introduce the joint learning of a physically based model and our neural relighting model, which further improves fidelity and generalization. Extensive experiments show that our approach achieves superior performance over existing methods in terms of both quality and generalizability. We also demonstrate quick personalization of URHand from a short phone scan of an unseen identity. |
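The linearity constraint above can be made concrete: if the per-texel map from shading features to color has no bias and no activation, the output is exactly linear in the incoming light. A toy check of that property, where the texel count, feature dimension, and initialization are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SpatiallyVaryingLinearLighting(nn.Module):
    """Sketch of the key constraint: a per-texel linear map (no bias, no
    activation) from physics-inspired shading features to RGB, so the
    output stays linear in the incoming light."""

    def __init__(self, feat_dim=16, texels=1024):
        super().__init__()
        # One learned 3 x feat_dim matrix per texel ("spatially varying").
        self.W = nn.Parameter(torch.randn(texels, 3, feat_dim) * 0.01)

    def forward(self, shading):                  # shading: (texels, feat_dim)
        return torch.einsum('tij,tj->ti', self.W, shading)   # (texels, 3)

model = SpatiallyVaryingLinearLighting()
s1 = torch.rand(1024, 16); s2 = torch.rand(1024, 16)
# Linearity of light transport: f(a*s1 + b*s2) == a*f(s1) + b*f(s2).
lhs = model(2.0 * s1 + 3.0 * s2)
rhs = 2.0 * model(s1) + 3.0 * model(s2)
print(torch.allclose(lhs, rhs, atol=1e-5))       # True
```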
2209.12755 | Zhengchun Zhou | Zhifan Ye, Zhengchun Zhou, Zilong Liu, Xiaohu Tang, Pingzhi Fan | New Spectrally Constrained Sequence Sets with Optimal {Periodic}
Cross-Correlation | to appear in IEEE Transactions on Information Theory | null | null | null | cs.IT math.CO math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spectrally constrained sequences (SCSs) play an important role in modern
communication and radar systems operating over non-contiguous spectrum. Despite
numerous research attempts over the past years, very few works are known on the
constructions of optimal SCSs with low cross-correlations. In this paper, we
address such a major problem by introducing a unifying framework to construct
unimodular SCS families using circular Florentine rectangles (CFRs) and
interleaving techniques. By leveraging the uniform power allocation in the
frequency domain for all the admissible carriers (a necessary condition for
beating the existing periodic correlation lower bound of SCSs), we present a
tighter correlation lower bound and show that it is achievable by our proposed
SCS families including multiple SCS sets with zero correlation zone properties.
| [
{
"created": "Mon, 26 Sep 2022 15:03:32 GMT",
"version": "v1"
}
] | 2022-09-27 | [
[
"Ye",
"Zhifan",
""
],
[
"Zhou",
"Zhengchun",
""
],
[
"Liu",
"Zilong",
""
],
[
"Tang",
"Xiaohu",
""
],
[
"Fan",
"Pingzhi",
""
]
] | Spectrally constrained sequences (SCSs) play an important role in modern communication and radar systems operating over non-contiguous spectrum. Despite numerous research attempts over the past years, very few works are known on the constructions of optimal SCSs with low cross-correlations. In this paper, we address such a major problem by introducing a unifying framework to construct unimodular SCS families using circular Florentine rectangles (CFRs) and interleaving techniques. By leveraging the uniform power allocation in the frequency domain for all the admissible carriers (a necessary condition for beating the existing periodic correlation lower bound of SCSs), we present a tighter correlation lower bound and show that it is achievable by our proposed SCS families including multiple SCS sets with zero correlation zone properties. |
2103.10699 | Aleksandr Petiushko | Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, Aleksandr
Petiushko | MDMMT: Multidomain Multimodal Transformer for Video Retrieval | null | CVPR Workshops 2021: 3354-3363 | 10.1109/CVPRW53098.2021.00374 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new state-of-the-art on the text to video retrieval task on
MSRVTT and LSMDC benchmarks where our model outperforms all previous solutions
by a large margin. Moreover, state-of-the-art results are achieved with a
single model on two datasets without finetuning. This multidomain
generalisation is achieved by a proper combination of different video caption
datasets. We show that training on different datasets can improve test results
of each other. Additionally, we check the intersection between many popular
datasets and find that MSRVTT has a significant overlap between the test and
the train parts, and the same situation is observed for ActivityNet.
| [
{
"created": "Fri, 19 Mar 2021 09:16:39 GMT",
"version": "v1"
}
] | 2021-11-09 | [
[
"Dzabraev",
"Maksim",
""
],
[
"Kalashnikov",
"Maksim",
""
],
[
"Komkov",
"Stepan",
""
],
[
"Petiushko",
"Aleksandr",
""
]
] | We present a new state-of-the-art on the text to video retrieval task on MSRVTT and LSMDC benchmarks where our model outperforms all previous solutions by a large margin. Moreover, state-of-the-art results are achieved with a single model on two datasets without finetuning. This multidomain generalisation is achieved by a proper combination of different video caption datasets. We show that training on different datasets can improve test results of each other. Additionally, we check the intersection between many popular datasets and find that MSRVTT has a significant overlap between the test and the train parts, and the same situation is observed for ActivityNet.
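The reported train/test overlap can be probed with a crude leakage check — exact caption matches after light normalization. The paper's actual methodology is not specified here; this only illustrates the idea, and real checks would also compare video IDs and near-duplicate frames.

```python
def overlap_report(train_captions, test_captions):
    """Crude train/test leakage check based on exact caption matches
    after whitespace and case normalization. Illustrative only."""
    norm = lambda s: " ".join(s.lower().split())
    train = {norm(c) for c in train_captions}
    test = {norm(c) for c in test_captions}
    shared = train & test
    return len(shared), len(shared) / max(len(test), 1)

train = ["a man plays guitar", "a cat jumps on the table"]
test = ["A man plays   guitar", "a dog runs in a park"]
print(overlap_report(train, test))   # (1, 0.5): half of the test set leaks
```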
2101.02305 | Maryam Motamedi | Maryam Motamedi, Jessica Dawson, Na Li, Douglas G. Down, and Nancy M.
Heddle | Demand Forecasting for Platelet Usage: from Univariate Time Series to
Multivariate Models | null | null | null | pone.0297391 | cs.LG stat.AP stat.ML | http://creativecommons.org/licenses/by/4.0/ | Platelet products are both expensive and have very short shelf lives. As
usage rates for platelets are highly variable, the effective management of
platelet demand and supply is very important yet challenging. The primary goal
of this paper is to present an efficient forecasting model for platelet demand
at Canadian Blood Services (CBS). To accomplish this goal, four different
demand forecasting methods, ARIMA (Auto Regressive Moving Average), Prophet,
lasso regression (least absolute shrinkage and selection operator) and LSTM
(Long Short-Term Memory) networks are utilized and evaluated. We use a large
clinical dataset for a centralized blood distribution centre for four hospitals
in Hamilton, Ontario, spanning from 2010 to 2018 and consisting of daily
platelet transfusions along with information such as the product
specifications, the recipients' characteristics, and the recipients' laboratory
test results. This study is the first to utilize different methods from
statistical time series models to data-driven regression and a machine learning
technique for platelet transfusion using clinical predictors and with different
amounts of data. We find that the multivariate approaches have the highest
accuracy in general, however, if sufficient data are available, a simpler time
series approach such as ARIMA appears to be sufficient. We also comment on the
approach to choose clinical indicators (inputs) for the multivariate models.
| [
{
"created": "Wed, 6 Jan 2021 23:54:10 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Dec 2022 22:18:47 GMT",
"version": "v2"
}
] | 2024-04-30 | [
[
"Motamedi",
"Maryam",
""
],
[
"Dawson",
"Jessica",
""
],
[
"Li",
"Na",
""
],
[
"Down",
"Douglas G.",
""
],
[
"Heddle",
"Nancy M.",
""
]
] | Platelet products are both expensive and have very short shelf lives. As usage rates for platelets are highly variable, the effective management of platelet demand and supply is very important yet challenging. The primary goal of this paper is to present an efficient forecasting model for platelet demand at Canadian Blood Services (CBS). To accomplish this goal, four different demand forecasting methods, ARIMA (Auto Regressive Moving Average), Prophet, lasso regression (least absolute shrinkage and selection operator) and LSTM (Long Short-Term Memory) networks are utilized and evaluated. We use a large clinical dataset for a centralized blood distribution centre for four hospitals in Hamilton, Ontario, spanning from 2010 to 2018 and consisting of daily platelet transfusions along with information such as the product specifications, the recipients' characteristics, and the recipients' laboratory test results. This study is the first to utilize different methods from statistical time series models to data-driven regression and a machine learning technique for platelet transfusion using clinical predictors and with different amounts of data. We find that the multivariate approaches have the highest accuracy in general, however, if sufficient data are available, a simpler time series approach such as ARIMA appears to be sufficient. We also comment on the approach to choose clinical indicators (inputs) for the multivariate models. |
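As a sketch of the univariate baseline, the snippet below fits a statsmodels ARIMA to synthetic daily demand with weekly seasonality and scores a hold-out window. The series, the (7, 1, 1) order, and the error metric are illustrative stand-ins, not CBS data or the paper's configuration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Stand-in for daily platelet demand: weekly seasonality plus noise.
days = np.arange(400)
demand = 20 + 4 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, 400)
train, test = demand[:370], demand[370:]

fit = ARIMA(train, order=(7, 1, 1)).fit()   # order chosen for illustration
forecast = fit.forecast(steps=len(test))    # 30-day-ahead demand forecast
mae = np.mean(np.abs(forecast - test))
print(f"MAE over {len(test)} days: {mae:.2f} units")
```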
2104.01782 | Ishani Mondal | Ishani Mondal | BBAEG: Towards BERT-based Biomedical Adversarial Example Generation for
Text Classification | To appear in NAACL 2021 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Healthcare predictive analytics aids medical decision-making, diagnosis
prediction and drug review analysis. Therefore, prediction accuracy is an
important criterion, which also necessitates robust predictive language
models. However, deep learning models have been proven vulnerable to
insignificantly perturbed input instances, which are less likely to be
misclassified by humans. Recent efforts at generating adversaries using
rule-based synonyms and BERT-MLMs have been witnessed in the general domain, but
the ever increasing biomedical literature poses unique challenges. We propose
BBAEG (Biomedical BERT-based Adversarial Example Generation), a black-box
attack algorithm for biomedical text classification, leveraging the strengths
of both domain-specific synonym replacement for biomedical named entities and
BERT-MLM predictions, spelling variation, and number replacement. Through
automatic and human evaluation on two datasets, we demonstrate that BBAEG
performs a stronger attack with better language fluency and semantic coherence
as compared to prior work.
| [
{
"created": "Mon, 5 Apr 2021 05:32:56 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Mondal",
"Ishani",
""
]
] | Healthcare predictive analytics aids medical decision-making, diagnosis prediction and drug review analysis. Therefore, prediction accuracy is an important criterion, which also necessitates robust predictive language models. However, deep learning models have been proven vulnerable to insignificantly perturbed input instances, which are less likely to be misclassified by humans. Recent efforts at generating adversaries using rule-based synonyms and BERT-MLMs have been witnessed in the general domain, but the ever increasing biomedical literature poses unique challenges. We propose BBAEG (Biomedical BERT-based Adversarial Example Generation), a black-box attack algorithm for biomedical text classification, leveraging the strengths of both domain-specific synonym replacement for biomedical named entities and BERT-MLM predictions, spelling variation, and number replacement. Through automatic and human evaluation on two datasets, we demonstrate that BBAEG performs a stronger attack with better language fluency and semantic coherence as compared to prior work.
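The synonym-replacement component reduces to a greedy black-box loop: swap in candidate synonyms one word at a time and keep any perturbation that flips the victim's decision. The sketch below uses a stubbed victim and a tiny lexicon; BBAEG's biomedical lexicons, BERT-MLM proposals, and semantic constraints are not reproduced.

```python
# Stub victim: a black-box classifier we can only query for probabilities.
def victim_proba(text):
    return 0.9 if "efficacious" not in text else 0.4   # toy behaviour

SYNONYMS = {"effective": ["efficacious", "potent"]}    # tiny toy lexicon

def synonym_attack(text, true_class_proba=victim_proba, threshold=0.5):
    """Greedy black-box attack in the spirit of BBAEG's synonym-replacement
    component: try domain synonyms word by word and keep the first swap
    that drops the victim's confidence below threshold. Stand-ins only."""
    words = text.split()
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w.lower(), []):
            candidate = " ".join(words[:i] + [syn] + words[i + 1:])
            if true_class_proba(candidate) < threshold:
                return candidate                        # misclassified
    return None                                         # attack failed

print(synonym_attack("the drug was effective against migraine"))
```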
2303.07564 | Hanyu Zhou | Hanyu Zhou, Yi Chang, Wending Yan, Luxin Yan | Unsupervised Cumulative Domain Adaptation for Foggy Scene Optical Flow | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optical flow has achieved great success under clean scenes, but suffers from
restricted performance under foggy scenes. To bridge the clean-to-foggy domain
gap, the existing methods typically adopt the domain adaptation to transfer the
motion knowledge from clean to synthetic foggy domain. However, these methods
unexpectedly neglect the synthetic-to-real domain gap, and thus are erroneous
when applied to real-world scenes. To handle the practical optical flow under
real foggy scenes, in this work, we propose a novel unsupervised cumulative
domain adaptation optical flow (UCDA-Flow) framework: depth-association motion
adaptation and correlation-alignment motion adaptation. Specifically, we
discover that depth is a key ingredient to influence the optical flow: the
deeper depth, the inferior optical flow, which motivates us to design a
depth-association motion adaptation module to bridge the clean-to-foggy domain
gap. Moreover, we figure out that the cost volume correlation shares similar
distribution of the synthetic and real foggy images, which enlightens us to
devise a correlation-alignment motion adaptation module to distill motion
knowledge of the synthetic foggy domain to the real foggy domain. Note that
synthetic fog is designed as the intermediate domain. Under this unified
framework, the proposed cumulative adaptation progressively transfers knowledge
from clean scenes to real foggy scenes. Extensive experiments have been
performed to verify the superiority of the proposed method.
| [
{
"created": "Tue, 14 Mar 2023 01:10:59 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Mar 2023 13:28:36 GMT",
"version": "v2"
}
] | 2023-03-21 | [
[
"Zhou",
"Hanyu",
""
],
[
"Chang",
"Yi",
""
],
[
"Yan",
"Wending",
""
],
[
"Yan",
"Luxin",
""
]
] | Optical flow has achieved great success under clean scenes, but suffers from restricted performance under foggy scenes. To bridge the clean-to-foggy domain gap, the existing methods typically adopt the domain adaptation to transfer the motion knowledge from clean to synthetic foggy domain. However, these methods unexpectedly neglect the synthetic-to-real domain gap, and thus are erroneous when applied to real-world scenes. To handle the practical optical flow under real foggy scenes, in this work, we propose a novel unsupervised cumulative domain adaptation optical flow (UCDA-Flow) framework: depth-association motion adaptation and correlation-alignment motion adaptation. Specifically, we discover that depth is a key ingredient to influence the optical flow: the deeper depth, the inferior optical flow, which motivates us to design a depth-association motion adaptation module to bridge the clean-to-foggy domain gap. Moreover, we figure out that the cost volume correlation shares similar distribution of the synthetic and real foggy images, which enlightens us to devise a correlation-alignment motion adaptation module to distill motion knowledge of the synthetic foggy domain to the real foggy domain. Note that synthetic fog is designed as the intermediate domain. Under this unified framework, the proposed cumulative adaptation progressively transfers knowledge from clean scenes to real foggy scenes. Extensive experiments have been performed to verify the superiority of the proposed method. |
1809.06972 | David Yoon | David J. Yoon, Tim Y. Tang and Timothy D. Barfoot | Mapless Online Detection of Dynamic Objects in 3D Lidar | 7 pages, 8 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a model-free, setting-independent method for online
detection of dynamic objects in 3D lidar data. We explicitly compensate for the
moving-while-scanning operation (motion distortion) of present-day 3D spinning
lidar sensors. Our detection method uses a motion-compensated freespace
querying algorithm and classifies between dynamic (currently moving) and static
(currently stationary) labels at the point level. For a quantitative analysis,
we establish a benchmark with motion-distorted lidar data using CARLA, an
open-source simulator for autonomous driving research. We also provide a
qualitative analysis with real data using a Velodyne HDL-64E in driving
scenarios. Compared to existing 3D lidar methods that are model-free, our
method is unique because of its setting independence and compensation for
pointcloud motion distortion.
| [
{
"created": "Wed, 19 Sep 2018 00:49:25 GMT",
"version": "v1"
}
] | 2018-09-20 | [
[
"Yoon",
"David J.",
""
],
[
"Tang",
"Tim Y.",
""
],
[
"Barfoot",
"Timothy D.",
""
]
] | This paper presents a model-free, setting-independent method for online detection of dynamic objects in 3D lidar data. We explicitly compensate for the moving-while-scanning operation (motion distortion) of present-day 3D spinning lidar sensors. Our detection method uses a motion-compensated freespace querying algorithm and classifies between dynamic (currently moving) and static (currently stationary) labels at the point level. For a quantitative analysis, we establish a benchmark with motion-distorted lidar data using CARLA, an open-source simulator for autonomous driving research. We also provide a qualitative analysis with real data using a Velodyne HDL-64E in driving scenarios. Compared to existing 3D lidar methods that are model-free, our method is unique because of its setting independence and compensation for pointcloud motion distortion. |
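The freespace-querying idea can be shown on a 2D occupancy grid: cells a ray passed through before a return are observed freespace, so a later return inside such a cell must come from something that moved in. This toy omits the motion-distortion compensation that is central to the paper and uses naive ray sampling in place of proper grid traversal.

```python
import numpy as np

def mark_freespace(grid, origin, hits, steps=100):
    """Mark cells between the sensor and each lidar return as free by
    sampling along the ray (a crude stand-in for exact ray traversal)."""
    for hit in hits:
        for t in np.linspace(0, 1, steps, endpoint=False):
            x, y = origin + t * (np.asarray(hit) - origin)
            grid[int(x), int(y)] = 1                   # observed freespace

grid = np.zeros((50, 50), dtype=np.uint8)
origin = np.array([25.0, 25.0])
scan_t0 = [(25.0, 45.0), (45.0, 25.0)]                 # returns at time t0
mark_freespace(grid, origin, scan_t0)

# A later return that falls inside previously observed freespace must come
# from something that moved in: label it dynamic.
p = (25, 35)
print("dynamic" if grid[p] else "static")              # dynamic
```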
2408.05649 | Blessing Agyei Kyem | Blessing Agyei Kyem, Eugene Kofi Okrah Denteh, Joshua Kofi Asamoah,
Kenneth Adomako Tutu, Armstrong Aboah | Advancing Pavement Distress Detection in Developing Countries: A Novel
Deep Learning Approach with Locally-Collected Datasets | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Road infrastructure maintenance in developing countries faces unique
challenges due to resource constraints and diverse environmental factors. This
study addresses the critical need for efficient, accurate, and locally-relevant
pavement distress detection methods in these regions. We present a novel deep
learning approach combining YOLO (You Only Look Once) object detection models
with a Convolutional Block Attention Module (CBAM) to simultaneously detect and
classify multiple pavement distress types. The model demonstrates robust
performance in detecting and classifying potholes, longitudinal cracks,
alligator cracks, and raveling, with confidence scores ranging from 0.46 to
0.93. While some misclassifications occur in complex scenarios, these provide
insights into unique challenges of pavement assessment in developing countries.
Additionally, we developed a web-based application for real-time distress
detection from images and videos. This research advances automated pavement
distress detection and provides a tailored solution for developing countries,
potentially improving road safety, optimizing maintenance strategies, and
contributing to sustainable transportation infrastructure development.
| [
{
"created": "Sat, 10 Aug 2024 23:20:36 GMT",
"version": "v1"
}
] | 2024-08-13 | [
[
"Kyem",
"Blessing Agyei",
""
],
[
"Denteh",
"Eugene Kofi Okrah",
""
],
[
"Asamoah",
"Joshua Kofi",
""
],
[
"Tutu",
"Kenneth Adomako",
""
],
[
"Aboah",
"Armstrong",
""
]
] | Road infrastructure maintenance in developing countries faces unique challenges due to resource constraints and diverse environmental factors. This study addresses the critical need for efficient, accurate, and locally-relevant pavement distress detection methods in these regions. We present a novel deep learning approach combining YOLO (You Only Look Once) object detection models with a Convolutional Block Attention Module (CBAM) to simultaneously detect and classify multiple pavement distress types. The model demonstrates robust performance in detecting and classifying potholes, longitudinal cracks, alligator cracks, and raveling, with confidence scores ranging from 0.46 to 0.93. While some misclassifications occur in complex scenarios, these provide insights into unique challenges of pavement assessment in developing countries. Additionally, we developed a web-based application for real-time distress detection from images and videos. This research advances automated pavement distress detection and provides a tailored solution for developing countries, potentially improving road safety, optimizing maintenance strategies, and contributing to sustainable transportation infrastructure development. |
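The CBAM block named above is well documented in the literature: channel attention from pooled descriptors through a shared MLP, followed by spatial attention from pooled channel maps through a large-kernel convolution. A compact PyTorch version follows; the reduction ratio and kernel size are the commonly used defaults, not necessarily this paper's settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed
    by spatial attention, as commonly published."""

    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared channel MLP
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention: squeeze spatial dims with avg- and max-pooling.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: squeeze channels, then a large-kernel conv.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 64, 32, 32)      # e.g. a detector backbone feature map
print(CBAM(64)(feat).shape)            # torch.Size([1, 64, 32, 32])
```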
1805.01890 | Kamran Kowsari | Kamran Kowsari, Mojtaba Heidarysafa, Donald E. Brown, Kiana Jafari
Meimandi, Laura E. Barnes | RMDL: Random Multimodel Deep Learning for Classification | Best Paper award ACM ICISDM | null | 10.1145/3206098.3206111 | null | cs.LG cs.AI cs.CV cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The continually increasing number of complex datasets each year necessitates
ever improving machine learning methods for robust and accurate categorization
of these data. This paper introduces Random Multimodel Deep Learning (RMDL): a
new ensemble, deep learning approach for classification. Deep learning models
have achieved state-of-the-art results across many domains. RMDL solves the
problem of finding the best deep learning structure and architecture while
simultaneously improving robustness and accuracy through ensembles of deep
learning architectures. RMDL can accept as input a variety of data, including
text, video, images, and symbolic data. This paper describes RMDL and shows
test results for image and text data including MNIST, CIFAR-10, WOS, Reuters,
IMDB, and 20newsgroup. These test results show that RMDL produces consistently
better performance than standard methods over a broad range of data types and
classification problems.
| [
{
"created": "Thu, 3 May 2018 19:36:43 GMT",
"version": "v1"
},
{
"created": "Thu, 31 May 2018 16:08:33 GMT",
"version": "v2"
}
] | 2018-06-01 | [
[
"Kowsari",
"Kamran",
""
],
[
"Heidarysafa",
"Mojtaba",
""
],
[
"Brown",
"Donald E.",
""
],
[
"Meimandi",
"Kiana Jafari",
""
],
[
"Barnes",
"Laura E.",
""
]
] | The continually increasing number of complex datasets each year necessitates ever improving machine learning methods for robust and accurate categorization of these data. This paper introduces Random Multimodel Deep Learning (RMDL): a new ensemble, deep learning approach for classification. Deep learning models have achieved state-of-the-art results across many domains. RMDL solves the problem of finding the best deep learning structure and architecture while simultaneously improving robustness and accuracy through ensembles of deep learning architectures. RMDL can accept as input a variety of data, including text, video, images, and symbolic data. This paper describes RMDL and shows test results for image and text data including MNIST, CIFAR-10, WOS, Reuters, IMDB, and 20newsgroup. These test results show that RMDL produces consistently better performance than standard methods over a broad range of data types and classification problems.
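The RMDL idea — an ensemble whose members have randomly generated structures, combined by majority vote — can be miniaturized with scikit-learn MLPs. The dataset, member count, and hyperparameter ranges below are illustrative stand-ins for the paper's random deep models.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
models = []
for i in range(5):                     # each member gets a random structure
    sizes = rng.integers(32, 129, size=int(rng.integers(1, 4)))
    layers = tuple(int(v) for v in sizes)
    m = MLPClassifier(hidden_layer_sizes=layers, max_iter=300,
                      random_state=i).fit(Xtr, ytr)
    models.append(m)

preds = np.stack([m.predict(Xte) for m in models])     # (models, samples)
# Majority vote across the randomly structured members.
majority = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)
print("ensemble accuracy:", (majority == yte).mean())
```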
2108.02585 | Linda Kleist | Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow | Geometric Embeddability of Complexes is $\exists \mathbb R$-complete | 26 pages, 18 figures | null | null | null | cs.CC cs.CG cs.DM math.CO math.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that the decision problem of determining whether a given (abstract
simplicial) $k$-complex has a geometric embedding in $\mathbb R^d$ is complete
for the Existential Theory of the Reals for all $d\geq 3$ and $k\in\{d-1,d\}$.
This implies that the problem is polynomial time equivalent to determining
whether a polynomial equation system has a real solution. Moreover, this
implies NP-hardness and constitutes the first hardness result for the
algorithmic problem of geometrically embedding (abstract simplicial) complexes.
| [
{
"created": "Thu, 5 Aug 2021 12:48:06 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Nov 2021 11:12:44 GMT",
"version": "v2"
}
] | 2021-11-08 | [
[
"Abrahamsen",
"Mikkel",
""
],
[
"Kleist",
"Linda",
""
],
[
"Miltzow",
"Tillmann",
""
]
] | We show that the decision problem of determining whether a given (abstract simplicial) $k$-complex has a geometric embedding in $\mathbb R^d$ is complete for the Existential Theory of the Reals for all $d\geq 3$ and $k\in\{d-1,d\}$. This implies that the problem is polynomial time equivalent to determining whether a polynomial equation system has a real solution. Moreover, this implies NP-hardness and constitutes the first hardness result for the algorithmic problem of geometrically embedding (abstract simplicial) complexes.
2402.09232 | Cristian Urbina | Gonzalo Navarro and Cristian Urbina | Iterated Straight-Line Programs | This version of the article includes the proofs omitted from LATIN24 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore an extension to straight-line programs (SLPs) that outperforms,
for some text families, the measure $\delta$ based on substring complexity, a
lower bound for most measures and compressors exploiting repetitiveness (which
are crucial in areas like Bioinformatics). The extension, called iterated SLPs
(ISLPs), allows rules of the form $A \rightarrow \Pi_{i=k_1}^{k_2}
B_1^{i^{c_1}}\cdots B_t^{i^{c_t}}$, for which we show how to extract any
substring of length $\lambda$, from the represented text $T[1.. n]$, in time
$O(\lambda + \log^2 n\log\log n)$. This is the first compressed representation
for repetitive texts breaking $\delta$ while, at the same time, supporting
direct access to arbitrary text symbols in polylogarithmic time. As a
byproduct, we extend Ganardi et al.'s technique to balance any SLP (so it has a
derivation tree of logarithmic height) to a wide generalization of SLPs,
including ISLPs.
| [
{
"created": "Wed, 14 Feb 2024 15:21:37 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Feb 2024 09:21:39 GMT",
"version": "v2"
}
] | 2024-02-16 | [
[
"Navarro",
"Gonzalo",
""
],
[
"Urbina",
"Cristian",
""
]
] | We explore an extension to straight-line programs (SLPs) that outperforms, for some text families, the measure $\delta$ based on substring complexity, a lower bound for most measures and compressors exploiting repetitiveness (which are crucial in areas like Bioinformatics). The extension, called iterated SLPs (ISLPs), allows rules of the form $A \rightarrow \Pi_{i=k_1}^{k_2} B_1^{i^{c_1}}\cdots B_t^{i^{c_t}}$, for which we show how to extract any substring of length $\lambda$, from the represented text $T[1.. n]$, in time $O(\lambda + \log^2 n\log\log n)$. This is the first compressed representation for repetitive texts breaking $\delta$ while, at the same time, supporting direct access to arbitrary text symbols in polylogarithmic time. As a byproduct, we extend Ganardi et al.'s technique to balance any SLP (so it has a derivation tree of logarithmic height) to a wide generalization of SLPs, including ISLPs. |
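An ISLP rule $A \rightarrow \Pi_{i=k_1}^{k_2} B_1^{i^{c_1}}\cdots B_t^{i^{c_t}}$ expands to a string of length $\sum_{i=k_1}^{k_2}\sum_j i^{c_j}\,|B_j|$, computable bottom-up without materializing the text. A small sketch over a home-made grammar encoding (the encoding is ours, not the paper's):

```python
def islp_length(symbol, rules, memo=None):
    """Length of the expansion of an ISLP symbol. Encoding: terminals are
    1-char strings absent from `rules`; an iterated rule is
    ('iter', k1, k2, [(B, c), ...]) meaning prod_{i=k1}^{k2} B^(i**c)...;
    a plain SLP rule is ('cat', [B1, B2, ...])."""
    memo = {} if memo is None else memo
    if symbol not in rules:
        return 1                                     # terminal symbol
    if symbol in memo:
        return memo[symbol]
    rule = rules[symbol]
    if rule[0] == 'cat':
        n = sum(islp_length(b, rules, memo) for b in rule[1])
    else:
        _, k1, k2, body = rule
        n = sum(i ** c * islp_length(b, rules, memo)
                for i in range(k1, k2 + 1) for (b, c) in body)
    memo[symbol] = n
    return n

rules = {
    'B': ('cat', ['a', 'b']),
    'A': ('iter', 1, 3, [('B', 2)]),   # expands to B^(1) B^(4) B^(9)
}
print(islp_length('A', rules))          # 2 * (1 + 4 + 9) = 28
```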
2403.01693 | Supreeth Narasimhaswamy | Supreeth Narasimhaswamy, Uttaran Bhattacharya, Xiang Chen, Ishita
Dasgupta, Saayan Mitra, Minh Hoai | HanDiffuser: Text-to-Image Generation With Realistic Hand Appearances | Revisions: 1. Added a link to project page in the abstract, 2.
Updated references and related work, 3. Fixed some grammatical errors | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Text-to-image generative models can generate high-quality humans, but realism
is lost when generating hands. Common artifacts include irregular hand poses,
shapes, incorrect numbers of fingers, and physically implausible finger
orientations. To generate images with realistic hands, we propose a novel
diffusion-based architecture called HanDiffuser that achieves realism by
injecting hand embeddings in the generative process. HanDiffuser consists of
two components: a Text-to-Hand-Params diffusion model to generate SMPL-Body and
MANO-Hand parameters from input text prompts, and a Text-Guided
Hand-Params-to-Image diffusion model to synthesize images by conditioning on
the prompts and hand parameters generated by the previous component. We
incorporate multiple aspects of hand representation, including 3D shapes and
joint-level finger positions, orientations and articulations, for robust
learning and reliable performance during inference. We conduct extensive
quantitative and qualitative experiments and perform user studies to
demonstrate the efficacy of our method in generating images with high-quality
hands.
| [
{
"created": "Mon, 4 Mar 2024 03:00:22 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Apr 2024 03:53:51 GMT",
"version": "v2"
}
] | 2024-04-23 | [
[
"Narasimhaswamy",
"Supreeth",
""
],
[
"Bhattacharya",
"Uttaran",
""
],
[
"Chen",
"Xiang",
""
],
[
"Dasgupta",
"Ishita",
""
],
[
"Mitra",
"Saayan",
""
],
[
"Hoai",
"Minh",
""
]
] | Text-to-image generative models can generate high-quality humans, but realism is lost when generating hands. Common artifacts include irregular hand poses, shapes, incorrect numbers of fingers, and physically implausible finger orientations. To generate images with realistic hands, we propose a novel diffusion-based architecture called HanDiffuser that achieves realism by injecting hand embeddings in the generative process. HanDiffuser consists of two components: a Text-to-Hand-Params diffusion model to generate SMPL-Body and MANO-Hand parameters from input text prompts, and a Text-Guided Hand-Params-to-Image diffusion model to synthesize images by conditioning on the prompts and hand parameters generated by the previous component. We incorporate multiple aspects of hand representation, including 3D shapes and joint-level finger positions, orientations and articulations, for robust learning and reliable performance during inference. We conduct extensive quantitative and qualitative experiments and perform user studies to demonstrate the efficacy of our method in generating images with high-quality hands. |
2407.17425 | Solomia Fedushko | Ivan Khoma, Solomia Fedushko, and Zoryana Kunch | Media Manipulations in the Coverage of Events of the Ukrainian
Revolution of Dignity: Historical, Linguistic, and Psychological Approaches | 14 pages | null | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/4.0/ | This article examines the use of manipulation in the coverage of events of
the Ukrainian Revolution of Dignity in the mass media, namely in the content of
the online newspaper Ukrainian Truth (Ukrainska pravda), online newspaper High
Castle (Vysokyi Zamok), and online newspaper ZIK during the public protest,
namely during the Ukrainian Revolution of Dignity. To analyze the contents of
these online newspapers, historical, linguistic, and psychological approaches
are used. Media manipulations in the coverage of events of the Ukrainian Revolution
of Dignity are studied. Internet resources that cover news are analyzed.
Current and most popular Internet resources are identified. The content of
online newspapers is analyzed and statistically processed. The Internet
content of the newspapers is classified by the level of significance of the
data (very significant data, significant data, and insignificant data). An
algorithm for detecting media manipulations in the coverage of the course of
the Ukrainian revolutions, based on historical, linguistic, and psychological
approaches, is designed. Methods of counteracting information attacks in online
newspapers are developed.
| [
{
"created": "Tue, 9 Jul 2024 09:46:27 GMT",
"version": "v1"
}
] | 2024-07-25 | [
[
"Khoma",
"Ivan",
""
],
[
"Fedushko",
"Solomia",
""
],
[
"Kunch",
"Zoryana",
""
]
] | This article examines the use of manipulation in the coverage of events of the Ukrainian Revolution of Dignity in the mass media, namely in the content of the online newspaper Ukrainian Truth (Ukrainska pravda), online newspaper High Castle (Vysokyi Zamok), and online newspaper ZIK during the public protest, namely during the Ukrainian Revolution of Dignity. To analyze the contents of these online newspapers, historical, linguistic, and psychological approaches are used. Media manipulations in the coverage of events of the Ukrainian Revolution of Dignity are studied. Internet resources that cover news are analyzed. Current and most popular Internet resources are identified. The content of online newspapers is analyzed and statistically processed. The Internet content of the newspapers is classified by the level of significance of the data (very significant data, significant data, and insignificant data). An algorithm for detecting media manipulations in the coverage of the course of the Ukrainian revolutions, based on historical, linguistic, and psychological approaches, is designed. Methods of counteracting information attacks in online newspapers are developed.
2211.12318 | Michael Mislove | Jason Z. S. Hu, Brigitte Pientka | A Categorical Normalization Proof for the Modal Lambda-Calculus | null | Electronic Notes in Theoretical Informatics and Computer Science,
Volume 1 - Proceedings of MFPS XXXVIII (February 22, 2023) entics:10360 | 10.46298/entics.10360 | null | cs.PL cs.LO | http://creativecommons.org/licenses/by/4.0/ | We investigate a simply typed modal $\lambda$-calculus,
$\lambda^{\to\square}$, due to Pfenning, Wong and Davies, where we define a
well-typed term with respect to a context stack that captures the possible
world semantics in a syntactic way. It provides a logical foundation for
multi-staged meta-programming. Our main contribution in this paper is a
normalization by evaluation (NbE) algorithm for $\lambda^{\to\square}$ which we
prove sound and complete. The NbE algorithm is a moderate extension to the
standard presheaf model of simply typed $\lambda$-calculus. However, central to
the model construction and the NbE algorithm is the observation of Kripke-style
substitutions on context stacks which brings together two previously separate
concepts, structural modal transformations on context stacks and substitutions
for individual assumptions. Moreover, Kripke-style substitutions allow us to
give a formulation for contextual types, which can represent open code in a
meta-programming setting. Our work lays the foundation for extending the
logical foundation by Pfenning, Wong, and Davies towards building a practical,
dependently typed foundation for meta-programming.
| [
{
"created": "Tue, 22 Nov 2022 15:12:17 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Feb 2023 23:30:32 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Feb 2023 21:35:16 GMT",
"version": "v3"
},
{
"created": "Mon, 20 Feb 2023 16:10:06 GMT",
"version": "v4"
}
] | 2023-06-22 | [
[
"Hu",
"Jason Z. S.",
""
],
[
"Pientka",
"Brigitte",
""
]
] | We investigate a simply typed modal $\lambda$-calculus, $\lambda^{\to\square}$, due to Pfenning, Wong and Davies, where we define a well-typed term with respect to a context stack that captures the possible world semantics in a syntactic way. It provides a logical foundation for multi-staged meta-programming. Our main contribution in this paper is a normalization by evaluation (NbE) algorithm for $\lambda^{\to\square}$ which we prove sound and complete. The NbE algorithm is a moderate extension to the standard presheaf model of simply typed $\lambda$-calculus. However, central to the model construction and the NbE algorithm is the observation of Kripke-style substitutions on context stacks which brings together two previously separate concepts, structural modal transformations on context stacks and substitutions for individual assumptions. Moreover, Kripke-style substitutions allow us to give a formulation for contextual types, which can represent open code in a meta-programming setting. Our work lays the foundation for extending the logical foundation by Pfenning, Wong, and Davies towards building a practical, dependently typed foundation for meta-programming. |
1702.02012 | Tanushri Chakravorty | Tanushri Chakravorty, Guillaume-Alexandre Bilodeau, Eric Granger | Tracking using Numerous Anchor points | Revised text version. Accepted for publication in Journal of Machine
Vision and Applications, December, 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, an online adaptive model-free tracker is proposed to track
single objects in video sequences to deal with real-world tracking challenges
like low-resolution, object deformation, occlusion and motion blur. The novelty
lies in the construction of a strong appearance model that captures features
from the initialized bounding box, which are then assembled into anchor-point
features. These features memorize the global pattern of the object and have an
internal star graph-like structure. They are unique and flexible, and help in
tracking generic and deformable objects with no restriction to specific object
types. In addition, the relevance of each feature is evaluated online using
short-term consistency and long-term consistency. These parameters are adapted
to retain consistent features that vote for the object location and that deal
with outliers for long-term tracking scenarios. Additionally, voting in a
Gaussian manner helps in tackling inherent noise of the tracking system and in
accurate object localization. Furthermore, the proposed tracker uses a pairwise
distance measure to cope with scale variations and combines pixel-level binary
features and global weighted color features for model update. Finally,
experimental results on a visual tracking benchmark dataset are presented to
demonstrate the effectiveness and competitiveness of the proposed tracker.
| [
{
"created": "Tue, 7 Feb 2017 13:51:06 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Dec 2017 21:31:28 GMT",
"version": "v2"
}
] | 2017-12-12 | [
[
"Chakravorty",
"Tanushri",
""
],
[
"Bilodeau",
"Guillaume-Alexandre",
""
],
[
"Granger",
"Eric",
""
]
] | In this paper, an online adaptive model-free tracker is proposed to track single objects in video sequences to deal with real-world tracking challenges like low-resolution, object deformation, occlusion and motion blur. The novelty lies in the construction of a strong appearance model that captures features from the initialized bounding box, which are then assembled into anchor-point features. These features memorize the global pattern of the object and have an internal star graph-like structure. They are unique and flexible, and help in tracking generic and deformable objects with no restriction to specific object types. In addition, the relevance of each feature is evaluated online using short-term consistency and long-term consistency. These parameters are adapted to retain consistent features that vote for the object location and that deal with outliers for long-term tracking scenarios. Additionally, voting in a Gaussian manner helps in tackling inherent noise of the tracking system and in accurate object localization. Furthermore, the proposed tracker uses a pairwise distance measure to cope with scale variations and combines pixel-level binary features and global weighted color features for model update. Finally, experimental results on a visual tracking benchmark dataset are presented to demonstrate the effectiveness and competitiveness of the proposed tracker. |
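As an aside on the Gaussian voting step described in the tracker abstract above (arXiv 1702.0120x record), here is a minimal numpy sketch: each matched anchor casts a Gaussian-weighted vote for the object center, and the most-voted pixel is taken as the localization. The anchor positions, offsets, weights, and bandwidth `sigma` are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

def gaussian_vote(anchors, offsets, weights, shape, sigma=3.0):
    """Accumulate Gaussian votes for the object center.

    anchors : (K, 2) current (y, x) positions of matched anchor features
    offsets : (K, 2) offsets from each anchor to the object center
    weights : (K,)   per-anchor consistency weights
    shape   : (H, W) size of the voting map
    """
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    vote_map = np.zeros(shape)
    for (ay, ax), (dy, dx), w in zip(anchors, offsets, weights):
        cy, cx = ay + dy, ax + dx  # center predicted by this anchor
        vote_map += w * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.unravel_index(np.argmax(vote_map), shape)  # most-voted center
```

The Gaussian kernel spreads each vote, so small per-anchor localization noise averages out instead of splitting votes across neighboring pixels.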
1909.02496 | Ping Li | Hang Zhang, Martin Slawski, Ping Li | The Benefits of Diversity: Permutation Recovery in Unlabeled Sensing
from Multiple Measurement Vectors | null | null | null | null | cs.IT cs.LG math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In "Unlabeled Sensing", one observes a set of linear measurements of an
underlying signal with incomplete or missing information about their ordering,
which can be modeled in terms of an unknown permutation. Previous work on the
case of a single noisy measurement vector has exposed two main challenges: 1) a
high requirement concerning the \emph{signal-to-noise ratio} ($\snr$), i.e.,
approximately of the order of $n^{5}$, and 2) a massive computational burden in
light of NP-hardness in general.
In this paper, we study the case of \emph{multiple} noisy measurement vectors
(MMVs) resulting from a \emph{common} permutation and investigate to what
extent the number of MMVs $m$ facilitates permutation recovery by "borrowing
strength". The above two challenges have at least partially been resolved
within our work. First, we show that a large stable rank of the signal
significantly reduces the required snr which can drop from a polynomial in $n$
for $m = 1$ to a constant for $m = \Omega(\log n)$, where $m$ denotes the
number of MMVs and $n$ denotes the number of measurements per MV. This bound is
shown to be sharp and is associated with a phase transition phenomenon. Second,
we propose a computational scheme for recovering the unknown permutation in
practice. For the "oracle case" with the known signal, the maximum likelihood
(ML) estimator reduces to a linear assignment problem whose global optimum can
be obtained efficiently. For the case in which both the signal and permutation
are unknown, the problem is reformulated as a bi-convex optimization problem
with an auxiliary variable, which can be solved by the Alternating Direction
Method of Multipliers (ADMM). Numerical experiments based on the proposed
computational scheme confirm the tightness of our theoretical analysis.
| [
{
"created": "Thu, 5 Sep 2019 15:55:59 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Jul 2020 05:49:06 GMT",
"version": "v2"
}
] | 2020-07-14 | [
[
"Zhang",
"Hang",
""
],
[
"Slawski",
"Martin",
""
],
[
"Li",
"Ping",
""
]
] | In "Unlabeled Sensing", one observes a set of linear measurements of an underlying signal with incomplete or missing information about their ordering, which can be modeled in terms of an unknown permutation. Previous work on the case of a single noisy measurement vector has exposed two main challenges: 1) a high requirement concerning the \emph{signal-to-noise ratio} ($\snr$), i.e., approximately of the order of $n^{5}$, and 2) a massive computational burden in light of NP-hardness in general. In this paper, we study the case of \emph{multiple} noisy measurement vectors (MMVs) resulting from a \emph{common} permutation and investigate to what extent the number of MMVs $m$ facilitates permutation recovery by "borrowing strength". The above two challenges have at least partially been resolved within our work. First, we show that a large stable rank of the signal significantly reduces the required snr which can drop from a polynomial in $n$ for $m = 1$ to a constant for $m = \Omega(\log n)$, where $m$ denotes the number of MMVs and $n$ denotes the number of measurements per MV. This bound is shown to be sharp and is associated with a phase transition phenomenon. Second, we propose a computational scheme for recovering the unknown permutation in practice. For the "oracle case" with the known signal, the maximum likelihood (ML) estimator reduces to a linear assignment problem whose global optimum can be obtained efficiently. For the case in which both the signal and permutation are unknown, the problem is reformulated as a bi-convex optimization problem with an auxiliary variable, which can be solved by the Alternating Direction Method of Multipliers (ADMM). Numerical experiments based on the proposed computational scheme confirm the tightness of our theoretical analysis. |
2402.01444 | Esther Rolf | Esther Rolf, Konstantin Klemmer, Caleb Robinson, Hannah Kerner | Mission Critical -- Satellite Data is a Distinct Modality in Machine
Learning | 15 pages, 5 figures | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Satellite data has the potential to inspire a seismic shift for machine
learning -- one in which we rethink existing practices designed for traditional
data modalities. As machine learning for satellite data (SatML) gains traction
for its real-world impact, our field is at a crossroads. We can either continue
applying ill-suited approaches, or we can initiate a new research agenda that
centers around the unique characteristics and challenges of satellite data.
This position paper argues that satellite data constitutes a distinct modality
for machine learning research and that we must recognize it as such to advance
the quality and impact of SatML research across theory, methods, and
deployment. We outline critical discussion questions and actionable suggestions
to transform SatML from merely an intriguing application area to a dedicated
research discipline that helps move the needle on big challenges for machine
learning and society.
| [
{
"created": "Fri, 2 Feb 2024 14:36:50 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Rolf",
"Esther",
""
],
[
"Klemmer",
"Konstantin",
""
],
[
"Robinson",
"Caleb",
""
],
[
"Kerner",
"Hannah",
""
]
] | Satellite data has the potential to inspire a seismic shift for machine learning -- one in which we rethink existing practices designed for traditional data modalities. As machine learning for satellite data (SatML) gains traction for its real-world impact, our field is at a crossroads. We can either continue applying ill-suited approaches, or we can initiate a new research agenda that centers around the unique characteristics and challenges of satellite data. This position paper argues that satellite data constitutes a distinct modality for machine learning research and that we must recognize it as such to advance the quality and impact of SatML research across theory, methods, and deployment. We outline critical discussion questions and actionable suggestions to transform SatML from merely an intriguing application area to a dedicated research discipline that helps move the needle on big challenges for machine learning and society. |
2212.07407 | Karl Pertsch | Karl Pertsch, Ruta Desai, Vikash Kumar, Franziska Meier, Joseph J.
Lim, Dhruv Batra, Akshara Rai | Cross-Domain Transfer via Semantic Skill Imitation | Project website: https://kpertsch.github.io/star | CoRL 2022 | null | null | cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | We propose an approach for semantic imitation, which uses demonstrations from
a source domain, e.g. human videos, to accelerate reinforcement learning (RL)
in a different target domain, e.g. a robotic manipulator in a simulated
kitchen. Instead of imitating low-level actions like joint velocities, our
approach imitates the sequence of demonstrated semantic skills like "opening
the microwave" or "turning on the stove". This allows us to transfer
demonstrations across environments (e.g. real-world to simulated kitchen) and
agent embodiments (e.g. bimanual human demonstration to robotic arm). We
evaluate on three challenging cross-domain learning problems and match the
performance of demonstration-accelerated RL approaches that require in-domain
demonstrations. In a simulated kitchen environment, our approach learns
long-horizon robot manipulation tasks, using less than 3 minutes of human video
demonstrations from a real-world kitchen. This enables scaling robot learning
via the reuse of demonstrations, e.g. collected as human videos, for learning
in any number of target domains.
| [
{
"created": "Wed, 14 Dec 2022 18:46:14 GMT",
"version": "v1"
}
] | 2022-12-15 | [
[
"Pertsch",
"Karl",
""
],
[
"Desai",
"Ruta",
""
],
[
"Kumar",
"Vikash",
""
],
[
"Meier",
"Franziska",
""
],
[
"Lim",
"Joseph J.",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Rai",
"Akshara",
""
]
] | We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g. a robotic manipulator in a simulated kitchen. Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove". This allows us to transfer demonstrations across environments (e.g. real-world to simulated kitchen) and agent embodiments (e.g. bimanual human demonstration to robotic arm). We evaluate on three challenging cross-domain learning problems and match the performance of demonstration-accelerated RL approaches that require in-domain demonstrations. In a simulated kitchen environment, our approach learns long-horizon robot manipulation tasks, using less than 3 minutes of human video demonstrations from a real-world kitchen. This enables scaling robot learning via the reuse of demonstrations, e.g. collected as human videos, for learning in any number of target domains. |
1509.08560 | EPTCS | Luca Bortolussi (Saarland University, University of Trieste,
ISTI-CNR), Rocco De Nicola (IMT Lucca), Vashti Galpin (University of
Edinburgh), Stephen Gilmore (University of Edinburgh), Jane Hillston
(University of Edinburgh), Diego Latella (ISTI-CNR), Michele Loreti
(Universit\`a di Firenze, IMT Lucca), Mieke Massink (ISTI-CNR) | CARMA: Collective Adaptive Resource-sharing Markovian Agents | In Proceedings QAPL 2015, arXiv:1509.08169 | EPTCS 194, 2015, pp. 16-31 | 10.4204/EPTCS.194.2 | null | cs.PL cs.DC cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present CARMA, a language recently defined to support
specification and analysis of collective adaptive systems. CARMA is a
stochastic process algebra equipped with linguistic constructs specifically
developed for modelling and programming systems that can operate in open-ended
and unpredictable environments. This class of systems is typically composed of
a huge number of interacting agents that dynamically adjust and combine their
behaviour to achieve specific goals. A CARMA model, termed a collective,
consists of a set of components, each of which exhibits a set of attributes. To
model dynamic aggregations, which are sometimes referred to as ensembles, CARMA
provides communication primitives that are based on predicates over the
exhibited attributes. These predicates are used to select the participants in a
communication. Two communication mechanisms are provided in the CARMA language:
multicast-based and unicast-based. In this paper, we first introduce the basic
principles of CARMA and then we show how our language can be used to support
specification with a simple but illustrative example of a socio-technical
collective adaptive system.
| [
{
"created": "Tue, 29 Sep 2015 02:10:19 GMT",
"version": "v1"
}
] | 2015-09-30 | [
[
"Bortolussi",
"Luca",
"",
"Saarland University, University of Trieste,\n ISTI-CNR"
],
[
"De Nicola",
"Rocco",
"",
"IMT Lucca"
],
[
"Galpin",
"Vashti",
"",
"University of\n Edinburgh"
],
[
"Gilmore",
"Stephen",
"",
"University of Edinburgh"
],
[
"Hillston",
"Jane",
"",
"University of Edinburgh"
],
[
"Latella",
"Diego",
"",
"ISTI-CNR"
],
[
"Loreti",
"Michele",
"",
"Università di Firenze, IMT Lucca"
],
[
"Massink",
"Mieke",
"",
"ISTI-CNR"
]
] | In this paper we present CARMA, a language recently defined to support specification and analysis of collective adaptive systems. CARMA is a stochastic process algebra equipped with linguistic constructs specifically developed for modelling and programming systems that can operate in open-ended and unpredictable environments. This class of systems is typically composed of a huge number of interacting agents that dynamically adjust and combine their behaviour to achieve specific goals. A CARMA model, termed a collective, consists of a set of components, each of which exhibits a set of attributes. To model dynamic aggregations, which are sometimes referred to as ensembles, CARMA provides communication primitives that are based on predicates over the exhibited attributes. These predicates are used to select the participants in a communication. Two communication mechanisms are provided in the CARMA language: multicast-based and unicast-based. In this paper, we first introduce the basic principles of CARMA and then we show how our language can be used to support specification with a simple but illustrative example of a socio-technical collective adaptive system. |
1207.1411 | Finnegan Southey | Finnegan Southey, Michael P. Bowling, Bryce Larson, Carmelo Piccione,
Neil Burch, Darse Billings, Chris Rayner | Bayes' Bluff: Opponent Modelling in Poker | Appears in Proceedings of the Twenty-First Conference on Uncertainty
in Artificial Intelligence (UAI2005) | null | null | UAI-P-2005-PG-550-558 | cs.GT cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Poker is a challenging problem for artificial intelligence, with
non-deterministic dynamics, partial observability, and the added difficulty of
unknown adversaries. Modelling all of the uncertainties in this domain is not
an easy task. In this paper we present a Bayesian probabilistic model for a
broad class of poker games, separating the uncertainty in the game dynamics
from the uncertainty of the opponent's strategy. We then describe approaches to
two key subproblems: (i) inferring a posterior over opponent strategies given a
prior distribution and observations of their play, and (ii) playing an
appropriate response to that distribution. We demonstrate the overall approach
on a reduced version of poker using Dirichlet priors and then on the full game
of Texas hold'em using a more informed prior. We demonstrate methods for
playing effective responses to the opponent, based on the posterior.
| [
{
"created": "Wed, 4 Jul 2012 16:22:47 GMT",
"version": "v1"
}
] | 2012-07-09 | [
[
"Southey",
"Finnegan",
""
],
[
"Bowling",
"Michael P.",
""
],
[
"Larson",
"Bryce",
""
],
[
"Piccione",
"Carmelo",
""
],
[
"Burch",
"Neil",
""
],
[
"Billings",
"Darse",
""
],
[
"Rayner",
"Chris",
""
]
] | Poker is a challenging problem for artificial intelligence, with non-deterministic dynamics, partial observability, and the added difficulty of unknown adversaries. Modelling all of the uncertainties in this domain is not an easy task. In this paper we present a Bayesian probabilistic model for a broad class of poker games, separating the uncertainty in the game dynamics from the uncertainty of the opponent's strategy. We then describe approaches to two key subproblems: (i) inferring a posterior over opponent strategies given a prior distribution and observations of their play, and (ii) playing an appropriate response to that distribution. We demonstrate the overall approach on a reduced version of poker using Dirichlet priors and then on the full game of Texas hold'em using a more informed prior. We demonstrate methods for playing effective responses to the opponent, based on the posterior. |
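The posterior inference in subproblem (i) can be illustrated for the Dirichlet case: conjugacy makes the posterior update a simple addition of observed action counts to the prior pseudo-counts. The three-action abstraction below is an invented toy, not the paper's game model.

```python
import numpy as np

# Dirichlet prior over an opponent's action frequencies (fold, call, raise)
prior = np.array([1.0, 1.0, 1.0])   # uniform pseudo-counts
observed = np.array([4, 10, 6])     # action counts from observed hands

posterior = prior + observed        # Dirichlet-multinomial conjugacy
mean_strategy = posterior / posterior.sum()
print(mean_strategy)                # posterior mean of the opponent's mixed strategy
```

A response policy can then be computed against this posterior mean, or by sampling strategies from the posterior, which is the flavor of subproblem (ii).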
2212.11414 | Armin Zirak | Armin Zirak and Hadi Hemmati | Improving Automated Program Repair with Domain Adaptation | 43 pages | null | null | null | cs.SE cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Automated Program Repair (APR) is defined as the process of fixing a
bug/defect in source code by an automated tool. APR tools have recently
achieved promising results by leveraging state-of-the-art Natural Language
Processing (NLP) techniques. APR tools such as TFix and CodeXGLUE, which
combine text-to-text transformers with software-specific techniques, currently
outperform alternatives. However, in most APR studies the train and test sets
are chosen from the same set of projects. In reality, APR models are meant to
be generalizable to new and different projects. Therefore, there is a potential
threat that reported APR models with high effectiveness perform poorly when the
characteristics of the new project or its bugs are different from those of the
training set (domain shift).
In this study, we first define and measure the domain shift problem in
automated program repair. We then propose a domain adaptation framework
that can adapt an APR model for a given target project. We conduct an empirical
study with three domain adaptation methods: FullFineTuning,
TuningWithLightWeightAdapterLayers, and CurriculumLearning using two
state-of-the-art domain adaptation tools (TFix and CodeXGLUE) and two APR
models on 611 bugs from 19 projects. The results show that our proposed
framework can improve the effectiveness of TFix by 13.05% and CodeXGLUE by
23.4%. Another contribution of this study is the proposal of a data synthesis
method to address the lack of labelled data in APR. We leverage transformers to
create a bug generator model. We use the generated synthetic data to domain
adapt TFix and CodeXGLUE on the projects with no data (Zero-shot learning),
which results in an average improvement of 5.76% and 24.42% for TFix and
CodeXGLUE, respectively.
| [
{
"created": "Wed, 21 Dec 2022 23:52:09 GMT",
"version": "v1"
}
] | 2023-01-13 | [
[
"Zirak",
"Armin",
""
],
[
"Hemmati",
"Hadi",
""
]
] | Automated Program Repair (APR) is defined as the process of fixing a bug/defect in source code by an automated tool. APR tools have recently achieved promising results by leveraging state-of-the-art Natural Language Processing (NLP) techniques. APR tools such as TFix and CodeXGLUE, which combine text-to-text transformers with software-specific techniques, currently outperform alternatives. However, in most APR studies the train and test sets are chosen from the same set of projects. In reality, APR models are meant to be generalizable to new and different projects. Therefore, there is a potential threat that reported APR models with high effectiveness perform poorly when the characteristics of the new project or its bugs are different from those of the training set (domain shift). In this study, we first define and measure the domain shift problem in automated program repair. We then propose a domain adaptation framework that can adapt an APR model for a given target project. We conduct an empirical study with three domain adaptation methods: FullFineTuning, TuningWithLightWeightAdapterLayers, and CurriculumLearning using two state-of-the-art domain adaptation tools (TFix and CodeXGLUE) and two APR models on 611 bugs from 19 projects. The results show that our proposed framework can improve the effectiveness of TFix by 13.05% and CodeXGLUE by 23.4%. Another contribution of this study is the proposal of a data synthesis method to address the lack of labelled data in APR. We leverage transformers to create a bug generator model. We use the generated synthetic data to domain adapt TFix and CodeXGLUE on the projects with no data (Zero-shot learning), which results in an average improvement of 5.76% and 24.42% for TFix and CodeXGLUE, respectively. |
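The FullFineTuning method above can be sketched with a generic text-to-text model. This is not the paper's code: the model name, the (buggy, fixed) example pair, and the hyperparameters are illustrative stand-ins.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")           # stand-in for a TFix-style model
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# (buggy, fixed) pairs drawn from the *target* project for adaptation
pairs = [("if (x = 1) {", "if (x == 1) {")]

model.train()
for buggy, fixed in pairs:
    batch = tok(buggy, return_tensors="pt")
    labels = tok(fixed, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    opt.step()
    opt.zero_grad()
```

Adapting every parameter on target-project pairs is the heaviest of the three methods; the adapter-layer variant would instead freeze the backbone and train only small inserted modules.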
1001.5100 | Xiwang Cao | Xiwang Cao and Lei Hu | On Exponential Sums, Newton identities and Dickson Polynomials over
Finite Fields | 18 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Let $\mathbb{F}_{q}$ be a finite field, $\mathbb{F}_{q^s}$ be an extension of
$\mathbb{F}_q$, let $f(x)\in \mathbb{F}_q[x]$ be a polynomial of degree $n$
with $\gcd(n,q)=1$. We present a recursive formula for evaluating the
exponential sum $\sum_{c\in \mathbb{F}_{q^s}}\chi^{(s)}(f(c))$. Let $a$ and $b$
be two elements in $\mathbb{F}_q$ with $a\neq 0$, and let $u$ be a positive integer. We
obtain an estimate for the exponential sum $\sum_{c\in
\mathbb{F}^*_{q^s}}\chi^{(s)}(ac^u+bc^{-1})$, where $\chi^{(s)}$ is the lifting
of an additive character $\chi$ of $\mathbb{F}_q$. Some properties of the
sequences constructed from these exponential sums are also provided.
| [
{
"created": "Thu, 28 Jan 2010 04:50:32 GMT",
"version": "v1"
}
] | 2010-01-29 | [
[
"Cao",
"Xiwang",
""
],
[
"Hu",
"Lei",
""
]
] | Let $\mathbb{F}_{q}$ be a finite field, $\mathbb{F}_{q^s}$ be an extension of $\mathbb{F}_q$, let $f(x)\in \mathbb{F}_q[x]$ be a polynomial of degree $n$ with $\gcd(n,q)=1$. We present a recursive formula for evaluating the exponential sum $\sum_{c\in \mathbb{F}_{q^s}}\chi^{(s)}(f(c))$. Let $a$ and $b$ be two elements in $\mathbb{F}_q$ with $a\neq 0$, and let $u$ be a positive integer. We obtain an estimate for the exponential sum $\sum_{c\in \mathbb{F}^*_{q^s}}\chi^{(s)}(ac^u+bc^{-1})$, where $\chi^{(s)}$ is the lifting of an additive character $\chi$ of $\mathbb{F}_q$. Some properties of the sequences constructed from these exponential sums are also provided. |
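For the prime-field case $s=1$ with the canonical additive character $\chi(x)=e^{2\pi i x/p}$, the second sum above can be checked by brute force for small parameters. The values of $a$, $b$, $u$, $p$ below are arbitrary examples chosen only to exercise the computation.

```python
import cmath

def chi(x, p):
    """Canonical additive character of F_p: x -> exp(2*pi*i*x/p)."""
    return cmath.exp(2j * cmath.pi * (x % p) / p)

def kloosterman_like_sum(a, b, u, p):
    """Sum over c in F_p^* of chi(a*c^u + b*c^{-1})."""
    total = 0
    for c in range(1, p):
        c_inv = pow(c, p - 2, p)              # inverse via Fermat's little theorem
        total += chi(a * pow(c, u, p) + b * c_inv, p)
    return total

print(abs(kloosterman_like_sum(a=2, b=3, u=2, p=101)))  # compare with the estimate
```

Extension fields ($s>1$) need the trace map inside the character, which this toy deliberately omits.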
1805.09302 | Safa Cicek | Safa Cicek and Stefano Soatto | Input and Weight Space Smoothing for Semi-supervised Learning | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose regularizing the empirical loss for semi-supervised learning by
acting on both the input (data) space, and the weight (parameter) space. We
show that the two are not equivalent, and in fact are complementary, one
affecting the minimality of the resulting representation, the other
insensitivity to nuisance variability. We propose a method to perform such
smoothing, which combines known input-space smoothing with a novel weight-space
smoothing, based on a min-max (adversarial) optimization. The resulting
Adversarial Block Coordinate Descent (ABCD) algorithm performs gradient ascent
with a small learning rate for a random subset of the weights, and standard
gradient descent on the remaining weights in the same mini-batch. It achieves
comparable performance to the state-of-the-art without resorting to heavy data
augmentation, using a relatively simple architecture.
| [
{
"created": "Wed, 23 May 2018 17:39:38 GMT",
"version": "v1"
}
] | 2018-05-24 | [
[
"Cicek",
"Safa",
""
],
[
"Soatto",
"Stefano",
""
]
] | We propose regularizing the empirical loss for semi-supervised learning by acting on both the input (data) space, and the weight (parameter) space. We show that the two are not equivalent, and in fact are complementary, one affecting the minimality of the resulting representation, the other insensitivity to nuisance variability. We propose a method to perform such smoothing, which combines known input-space smoothing with a novel weight-space smoothing, based on a min-max (adversarial) optimization. The resulting Adversarial Block Coordinate Descent (ABCD) algorithm performs gradient ascent with a small learning rate for a random subset of the weights, and standard gradient descent on the remaining weights in the same mini-batch. It achieves comparable performance to the state-of-the-art without resorting to heavy data augmentation, using a relatively simple architecture. |
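The ABCD update described in the abstract above can be sketched in a few lines: after one backward pass, a random subset of weights takes a small gradient-*ascent* step while the remaining weights take an ordinary descent step. A minimal PyTorch sketch; the ascent rate `eps`, subset fraction `frac`, and per-element masking are assumptions rather than the authors' exact scheme.

```python
import torch

def abcd_step(model, loss, lr=0.1, eps=1e-3, frac=0.1):
    """One Adversarial Block Coordinate Descent update on a mini-batch loss."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            mask = (torch.rand_like(p) < frac).float()  # random weight subset
            # ascent (small rate) on the masked subset, descent elsewhere
            p += mask * eps * p.grad - (1 - mask) * lr * p.grad
```

The ascent on a small random block is what makes the update a min-max smoothing of the loss in weight space, complementing input-space smoothing.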
cs/0312056 | Christian Duncan | Christian A. Duncan (University of Miami), David Eppstein (University
of California, Irvine), Stephen G. Kobourov (University of Arizona) | The Geometric Thickness of Low Degree Graphs | 10 pages, 7 figures, submitted to SoCG 2004 | null | null | null | cs.CG cs.DM | null | We prove that the geometric thickness of graphs whose maximum degree is no
more than four is two. All of our algorithms run in O(n) time, where n is the
number of vertices in the graph. In our proofs, we present an embedding
algorithm for graphs with maximum degree three that uses an n x n grid and a
more complex algorithm for embedding a graph with maximum degree four. We also
show a variation using orthogonal edges for maximum degree four graphs that
also uses an n x n grid. The results have implications in graph theory, graph
drawing, and VLSI design.
| [
{
"created": "Wed, 24 Dec 2003 06:59:09 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Duncan",
"Christian A.",
"",
"University of Miami"
],
[
"Eppstein",
"David",
"",
"University\n of California, Irvine"
],
[
"Kobourov",
"Stephen G.",
"",
"University of Arizona"
]
] | We prove that the geometric thickness of graphs whose maximum degree is no more than four is two. All of our algorithms run in O(n) time, where n is the number of vertices in the graph. In our proofs, we present an embedding algorithm for graphs with maximum degree three that uses an n x n grid and a more complex algorithm for embedding a graph with maximum degree four. We also show a variation using orthogonal edges for maximum degree four graphs that also uses an n x n grid. The results have implications in graph theory, graph drawing, and VLSI design. |
2302.09173 | Lajanugen Logeswaran | Lajanugen Logeswaran, Sungryull Sohn, Yunseok Jang, Moontae Lee,
Honglak Lee | Unsupervised Task Graph Generation from Instructional Video Transcripts | Findings of ACL 2023 | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work explores the problem of generating task graphs of real-world
activities. Different from prior formulations, we consider a setting where text
transcripts of instructional videos of a real-world activity being performed
(e.g., making coffee) are provided, and the goal is to identify the key steps relevant
to the task as well as the dependency relationship between these key steps. We
propose a novel task graph generation approach that combines the reasoning
capabilities of instruction-tuned language models along with clustering and
ranking components to generate accurate task graphs in a completely
unsupervised manner. We show that the proposed approach generates more accurate
task graphs compared to a supervised learning approach on tasks from the ProceL
and CrossTask datasets.
| [
{
"created": "Fri, 17 Feb 2023 22:50:08 GMT",
"version": "v1"
},
{
"created": "Tue, 2 May 2023 19:46:14 GMT",
"version": "v2"
}
] | 2023-05-04 | [
[
"Logeswaran",
"Lajanugen",
""
],
[
"Sohn",
"Sungryull",
""
],
[
"Jang",
"Yunseok",
""
],
[
"Lee",
"Moontae",
""
],
[
"Lee",
"Honglak",
""
]
] | This work explores the problem of generating task graphs of real-world activities. Different from prior formulations, we consider a setting where text transcripts of instructional videos of a real-world activity being performed (e.g., making coffee) are provided, and the goal is to identify the key steps relevant to the task as well as the dependency relationship between these key steps. We propose a novel task graph generation approach that combines the reasoning capabilities of instruction-tuned language models along with clustering and ranking components to generate accurate task graphs in a completely unsupervised manner. We show that the proposed approach generates more accurate task graphs compared to a supervised learning approach on tasks from the ProceL and CrossTask datasets. |
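The clustering component mentioned above can be approximated with off-the-shelf tooling. The sketch below groups paraphrased step descriptions using TF-IDF features and agglomerative clustering; this is a simplification of (not a substitute for) the paper's LM-based pipeline, and the example steps are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

steps = ["grind the coffee beans", "grind beans", "pour hot water",
         "pour water over grounds", "add milk"]

X = TfidfVectorizer().fit_transform(steps).toarray()
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
for label, step in zip(labels, steps):
    print(label, step)   # paraphrased steps should share a cluster id
```

Each resulting cluster plays the role of a candidate "key step"; ranking and dependency inference would then operate over these clusters.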
2310.16191 | Zhuolin Yang | Zhuolin Yang, Zain Sarwar, Iris Hwang, Ronik Bhaskar, Ben Y. Zhao,
Haitao Zheng | Can Virtual Reality Protect Users from Keystroke Inference Attacks? | Accepted by USENIX 2024 | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Virtual Reality (VR) has gained popularity by providing immersive and
interactive experiences without geographical limitations. It also provides a
sense of personal privacy through physical separation. In this paper, we show
that despite assumptions of enhanced privacy, VR is unable to shield its users
from side-channel attacks that steal private information. Ironically, this
vulnerability arises from VR's greatest strength, its immersive and interactive
nature. We demonstrate this by designing and implementing a new set of
keystroke inference attacks in shared virtual environments, where an attacker
(VR user) can recover the content typed by another VR user by observing their
avatar. While the avatar displays noisy telemetry of the user's hand motion, an
intelligent attacker can use that data to recognize typed keys and reconstruct
typed content, without knowing the keyboard layout or gathering labeled data.
We evaluate the proposed attacks using IRB-approved user studies across
multiple VR scenarios. For 13 out of 15 tested users, our attacks accurately
recognize 86%-98% of typed keys, and the recovered content retains up to 98% of
the meaning of the original typed content. We also discuss potential defenses.
| [
{
"created": "Tue, 24 Oct 2023 21:19:38 GMT",
"version": "v1"
}
] | 2023-10-26 | [
[
"Yang",
"Zhuolin",
""
],
[
"Sarwar",
"Zain",
""
],
[
"Hwang",
"Iris",
""
],
[
"Bhaskar",
"Ronik",
""
],
[
"Zhao",
"Ben Y.",
""
],
[
"Zheng",
"Haitao",
""
]
] | Virtual Reality (VR) has gained popularity by providing immersive and interactive experiences without geographical limitations. It also provides a sense of personal privacy through physical separation. In this paper, we show that despite assumptions of enhanced privacy, VR is unable to shield its users from side-channel attacks that steal private information. Ironically, this vulnerability arises from VR's greatest strength, its immersive and interactive nature. We demonstrate this by designing and implementing a new set of keystroke inference attacks in shared virtual environments, where an attacker (VR user) can recover the content typed by another VR user by observing their avatar. While the avatar displays noisy telemetry of the user's hand motion, an intelligent attacker can use that data to recognize typed keys and reconstruct typed content, without knowing the keyboard layout or gathering labeled data. We evaluate the proposed attacks using IRB-approved user studies across multiple VR scenarios. For 13 out of 15 tested users, our attacks accurately recognize 86%-98% of typed keys, and the recovered content retains up to 98% of the meaning of the original typed content. We also discuss potential defenses. |
1712.06763 | Fl\'avio Miyazawa | Y. Kohayakawa, F. K. Miyazawa, Y. Wakabayashi | A tight lower bound for an online hypercube packing problem and bounds
for prices of anarchy of a related game | null | Discrete Mathematics and Theoretical Computer Science, 23(3), 2021 | 10.46298/dmtcs.8325 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove a tight lower bound on the asymptotic performance ratio $\rho$ of
the bounded space online $d$-hypercube bin packing problem, solving an open
question raised in 2005. In the classic $d$-hypercube bin packing problem, we
are given a sequence of $d$-dimensional hypercubes and we have an unlimited
number of bins, each of which is a $d$-dimensional unit hypercube. The goal is
to pack (orthogonally) the given hypercubes into the minimum possible number of
bins, in such a way that no two hypercubes in the same bin overlap. The bounded
space online $d$-hypercube bin packing problem is a variant of the
$d$-hypercube bin packing problem, in which the hypercubes arrive online and
each one must be packed in an open bin without the knowledge of the next
hypercubes. Moreover, at each moment, only a constant number of open bins are
allowed (whenever a new bin is used, it is considered open, and it remains so
until it is considered closed, in which case, it is not allowed to accept new
hypercubes). Epstein and van Stee [SIAM J. Comput. 35 (2005), no. 2, 431-448]
showed that $\rho$ is $\Omega(\log d)$ and $O(d/\log d)$, and conjectured that
it is $\Theta(\log d)$. We show that $\rho$ is in fact $\Theta(d/\log d)$. To
obtain this result, we elaborate on some ideas presented by those authors, and
go one step further showing how to obtain better (offline) packings of certain
special instances for which one knows how many bins any bounded space algorithm
has to use. Our main contribution establishes the existence of such packings,
for large enough $d$, using probabilistic arguments. Such packings also lead to
lower bounds for the prices of anarchy of the selfish $d$-hypercube bin packing
game. We present a lower bound of $\Omega(d/\log d)$ for the pure price of
anarchy of this game, and we also give a lower bound of $\Omega(\log d)$ for
its strong price of anarchy.
| [
{
"created": "Tue, 19 Dec 2017 03:15:14 GMT",
"version": "v1"
}
] | 2023-04-11 | [
[
"Kohayakawa",
"Y.",
""
],
[
"Miyazawa",
"F. K.",
""
],
[
"Wakabayashi",
"Y.",
""
]
] | We prove a tight lower bound on the asymptotic performance ratio $\rho$ of the bounded space online $d$-hypercube bin packing problem, solving an open question raised in 2005. In the classic $d$-hypercube bin packing problem, we are given a sequence of $d$-dimensional hypercubes and we have an unlimited number of bins, each of which is a $d$-dimensional unit hypercube. The goal is to pack (orthogonally) the given hypercubes into the minimum possible number of bins, in such a way that no two hypercubes in the same bin overlap. The bounded space online $d$-hypercube bin packing problem is a variant of the $d$-hypercube bin packing problem, in which the hypercubes arrive online and each one must be packed in an open bin without the knowledge of the next hypercubes. Moreover, at each moment, only a constant number of open bins are allowed (whenever a new bin is used, it is considered open, and it remains so until it is considered closed, in which case, it is not allowed to accept new hypercubes). Epstein and van Stee [SIAM J. Comput. 35 (2005), no. 2, 431-448] showed that $\rho$ is $\Omega(\log d)$ and $O(d/\log d)$, and conjectured that it is $\Theta(\log d)$. We show that $\rho$ is in fact $\Theta(d/\log d)$. To obtain this result, we elaborate on some ideas presented by those authors, and go one step further showing how to obtain better (offline) packings of certain special instances for which one knows how many bins any bounded space algorithm has to use. Our main contribution establishes the existence of such packings, for large enough $d$, using probabilistic arguments. Such packings also lead to lower bounds for the prices of anarchy of the selfish $d$-hypercube bin packing game. We present a lower bound of $\Omega(d/\log d)$ for the pure price of anarchy of this game, and we also give a lower bound of $\Omega(\log d)$ for its strong price of anarchy. |
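The "bounded space" constraint itself is easy to state in code. The sketch below is a one-dimensional analogue (classical online bin packing rather than hypercube packing) that keeps at most `k` bins open and closes the fullest bin when a new one must be opened; the closing rule is an illustrative choice, not one of the algorithms analyzed above.

```python
def bounded_space_first_fit(items, k=2):
    """1-D online bin packing with at most k open unit-capacity bins."""
    open_bins, closed = [], 0
    for x in items:                        # items arrive online, 0 < x <= 1
        for i, load in enumerate(open_bins):
            if load + x <= 1.0:
                open_bins[i] += x          # first open bin with room
                break
        else:
            if len(open_bins) == k:        # bounded space: close a bin first
                closed += 1
                open_bins.remove(max(open_bins))
            open_bins.append(x)            # open a fresh bin for x
    return closed + len(open_bins)         # total bins used
```

Closed bins can never be reopened, which is exactly what limits bounded-space algorithms and drives the $\Theta(d/\log d)$ ratio in the hypercube setting.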
0912.2479 | VishalGoyal | Vishal Goyal | Pervasive Emotions in Pervasive Computing Environments | This submission has been withdrawn by arXiv admin. It is a verbatim
copy of arXiv:0912.1810 with only the author name and title changed | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This submission has been withdrawn by arXiv admin. It is a verbatim copy of
arXiv:0912.1810 with only the author name and title changed.
| [
{
"created": "Sun, 13 Dec 2009 06:10:42 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Jan 2010 15:20:14 GMT",
"version": "v2"
}
] | 2010-01-12 | [
[
"Goyal",
"Vishal",
""
]
] | This submission has been withdrawn by arXiv admin. It is a verbatim copy of arXiv:0912.1810 with only the author name and title changed. |
1409.0706 | Shiyan Zhong | Shiyan Zhong | Efficient Scheme for Active Particle Selection in N-body Simulations | 8 pages, 4 figures | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an efficient method for active particle selection, working with
the Hermite Individual Time Steps (HITS) scheme in the direct N-body simulation
code $\varphi$GRAPE. For a simulation with $N$ particles, this method can reduce
the computational complexity of active particle selection from
$O(N\cdot N_{step})$ to $O(\overline{N_{act}}\cdot N_{step})$, where
$\overline{N_{act}}$ is the average active particle number in each time step,
which is much smaller than $N$, and $N_{step}$ is the total number of time steps
integrated during the simulation. This can save a lot of time spent on the
active particle selection part, especially in the case of low
$\overline{N_{act}}$.
| [
{
"created": "Tue, 2 Sep 2014 13:46:06 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Sep 2014 07:02:14 GMT",
"version": "v2"
}
] | 2014-09-16 | [
[
"Zhong",
"Shiyan",
""
]
] | We propose an efficient method for active particle selection, working with the Hermite Individual Time Steps (HITS) scheme in the direct N-body simulation code $\varphi$GRAPE. For a simulation with $N$ particles, this method can reduce the computational complexity of active particle selection from $O(N\cdot N_{step})$ to $O(\overline{N_{act}}\cdot N_{step})$, where $\overline{N_{act}}$ is the average active particle number in each time step, which is much smaller than $N$, and $N_{step}$ is the total number of time steps integrated during the simulation. This can save a lot of time spent on the active particle selection part, especially in the case of low $\overline{N_{act}}$. |
1809.00069 | Mingbo Ma | Liang Huang and Kai Zhao and Mingbo Ma | When to Finish? Optimal Beam Search for Neural Text Generation (modulo
beam size) | accepted by EMNLP 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In neural text generation such as neural machine translation, summarization,
and image captioning, beam search is widely used to improve the output text
quality. However, in the neural generation setting, hypotheses can finish in
different steps, which makes it difficult to decide when to end beam search to
ensure optimality. We propose a provably optimal beam search algorithm that
will always return the optimal-score complete hypothesis (modulo beam size),
and finish as soon as the optimality is established (finishing no later than
the baseline). To counter neural generation's tendency for shorter hypotheses,
we also introduce a bounded length reward mechanism which allows a modified
version of our beam search algorithm to remain optimal. Experiments on neural
machine translation demonstrate that our principled beam search algorithm leads
to improvement in BLEU score over previously proposed alternatives.
| [
{
"created": "Fri, 31 Aug 2018 22:01:48 GMT",
"version": "v1"
}
] | 2018-09-05 | [
[
"Huang",
"Liang",
""
],
[
"Zhao",
"Kai",
""
],
[
"Ma",
"Mingbo",
""
]
] | In neural text generation such as neural machine translation, summarization, and image captioning, beam search is widely used to improve the output text quality. However, in the neural generation setting, hypotheses can finish in different steps, which makes it difficult to decide when to end beam search to ensure optimality. We propose a provably optimal beam search algorithm that will always return the optimal-score complete hypothesis (modulo beam size), and finish as soon as the optimality is established (finishing no later than the baseline). To counter neural generation's tendency for shorter hypotheses, we also introduce a bounded length reward mechanism which allows a modified version of our beam search algorithm to remain optimal. Experiments on neural machine translation demonstrate that our principled beam search algorithm leads to improvement in BLEU score over previously proposed alternatives. |
2004.02501 | Jinshan Pan | Jinshan Pan, Haoran Bai, Jinhui Tang | Cascaded Deep Video Deblurring Using Temporal Sharpness Prior | CVPR 2020. The code is available at https://github.com/csbhr/CDVD-TSP | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a simple and effective deep convolutional neural network (CNN)
model for video deblurring. The proposed algorithm mainly consists of optical
flow estimation from intermediate latent frames and latent frame restoration
steps. It first develops a deep CNN model to estimate optical flow from
intermediate latent frames and then restores the latent frames based on the
estimated optical flow. To better explore the temporal information from videos,
we develop a temporal sharpness prior to constrain the deep CNN model to help
the latent frame restoration. We develop an effective cascaded training
approach and jointly train the proposed CNN model in an end-to-end manner. We
show that exploring the domain knowledge of video deblurring is able to make
the deep CNN model more compact and efficient. Extensive experimental results
show that the proposed algorithm performs favorably against state-of-the-art
methods on the benchmark datasets as well as real-world videos.
| [
{
"created": "Mon, 6 Apr 2020 09:13:49 GMT",
"version": "v1"
}
] | 2020-04-07 | [
[
"Pan",
"Jinshan",
""
],
[
"Bai",
"Haoran",
""
],
[
"Tang",
"Jinhui",
""
]
] | We present a simple and effective deep convolutional neural network (CNN) model for video deblurring. The proposed algorithm mainly consists of optical flow estimation from intermediate latent frames and latent frame restoration steps. It first develops a deep CNN model to estimate optical flow from intermediate latent frames and then restores the latent frames based on the estimated optical flow. To better explore the temporal information from videos, we develop a temporal sharpness prior to constrain the deep CNN model to help the latent frame restoration. We develop an effective cascaded training approach and jointly train the proposed CNN model in an end-to-end manner. We show that exploring the domain knowledge of video deblurring is able to make the deep CNN model more compact and efficient. Extensive experimental results show that the proposed algorithm performs favorably against state-of-the-art methods on the benchmark datasets as well as real-world videos. |
2204.00858 | Kunal Mittal | Uma Girish, Kunal Mittal, Ran Raz, Wei Zhan | Polynomial Bounds On Parallel Repetition For All 3-Player Games With
Binary Inputs | null | null | null | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove that for every 3-player (3-prover) game $\mathcal G$ with value less
than one, whose query distribution has the support $\mathcal S = \{(1,0,0),
(0,1,0), (0,0,1)\}$ of hamming weight one vectors, the value of the $n$-fold
parallel repetition $\mathcal G^{\otimes n}$ decays polynomially fast to zero;
that is, there is a constant $c = c(\mathcal G)>0$ such that the value of the
game $\mathcal G^{\otimes n}$ is at most $n^{-c}$.
Following the recent work of Girish, Holmgren, Mittal, Raz and Zhan (STOC
2022), our result is the missing piece that implies a similar bound for a much
more general class of multiplayer games: For $\textbf{every}$ 3-player game
$\mathcal G$ over $\textit{binary questions}$ and $\textit{arbitrary answer
lengths}$, with value less than 1, there is a constant $c = c(\mathcal G)>0$
such that the value of the game $\mathcal G^{\otimes n}$ is at most $n^{-c}$.
Our proof technique is new and requires many new ideas. For example, we make
use of the Level-$k$ inequalities from Boolean Fourier Analysis, which, to the
best of our knowledge, have not been explored in this context prior to our
work.
| [
{
"created": "Sat, 2 Apr 2022 13:37:49 GMT",
"version": "v1"
}
] | 2022-04-05 | [
[
"Girish",
"Uma",
""
],
[
"Mittal",
"Kunal",
""
],
[
"Raz",
"Ran",
""
],
[
"Zhan",
"Wei",
""
]
] | We prove that for every 3-player (3-prover) game $\mathcal G$ with value less than one, whose query distribution has the support $\mathcal S = \{(1,0,0), (0,1,0), (0,0,1)\}$ of hamming weight one vectors, the value of the $n$-fold parallel repetition $\mathcal G^{\otimes n}$ decays polynomially fast to zero; that is, there is a constant $c = c(\mathcal G)>0$ such that the value of the game $\mathcal G^{\otimes n}$ is at most $n^{-c}$. Following the recent work of Girish, Holmgren, Mittal, Raz and Zhan (STOC 2022), our result is the missing piece that implies a similar bound for a much more general class of multiplayer games: For $\textbf{every}$ 3-player game $\mathcal G$ over $\textit{binary questions}$ and $\textit{arbitrary answer lengths}$, with value less than 1, there is a constant $c = c(\mathcal G)>0$ such that the value of the game $\mathcal G^{\otimes n}$ is at most $n^{-c}$. Our proof technique is new and requires many new ideas. For example, we make use of the Level-$k$ inequalities from Boolean Fourier Analysis, which, to the best of our knowledge, have not been explored in this context prior to our work. |
2011.09257 | Pablo Pino | Pablo Pino, Denis Parra, Pablo Messina, Cecilia Besa, Sergio Uribe | Inspecting state of the art performance and NLP metrics in image-based
medical report generation | 3 pages, 1 figure, 1 table. Accepted in LatinX in AI workshop at
NeurIPS 2020. (v3 updated ack) | null | null | null | cs.CL cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several deep learning architectures have been proposed over the last years to
deal with the problem of generating a written report given an imaging exam as
input. Most works evaluate the generated reports using standard Natural
Language Processing (NLP) metrics (e.g. BLEU, ROUGE), reporting significant
progress. In this article, we contrast this progress by comparing state of the
art (SOTA) models against weak baselines. We show that simple and even naive
approaches yield near SOTA performance on most traditional NLP metrics. We
conclude that evaluation methods in this task should be further studied towards
correctly measuring clinical accuracy, ideally involving physicians to
contribute to this end.
| [
{
"created": "Wed, 18 Nov 2020 13:09:12 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Nov 2020 17:58:40 GMT",
"version": "v2"
},
{
"created": "Sat, 15 Jan 2022 06:05:51 GMT",
"version": "v3"
}
] | 2022-01-19 | [
[
"Pino",
"Pablo",
""
],
[
"Parra",
"Denis",
""
],
[
"Messina",
"Pablo",
""
],
[
"Besa",
"Cecilia",
""
],
[
"Uribe",
"Sergio",
""
]
] | Several deep learning architectures have been proposed over the last years to deal with the problem of generating a written report given an imaging exam as input. Most works evaluate the generated reports using standard Natural Language Processing (NLP) metrics (e.g. BLEU, ROUGE), reporting significant progress. In this article, we contrast this progress by comparing state of the art (SOTA) models against weak baselines. We show that simple and even naive approaches yield near SOTA performance on most traditional NLP metrics. We conclude that evaluation methods in this task should be further studied towards correctly measuring clinical accuracy, ideally involving physicians to contribute to this end. |
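The weak-baseline comparison described in the last abstract is easy to reproduce with standard tooling. A minimal sketch with NLTK's sentence-level BLEU; the report strings are invented examples.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "no acute cardiopulmonary abnormality".split()
generated = "no acute cardiopulmonary process".split()
constant_baseline = "no acute cardiopulmonary abnormality".split()  # e.g., the corpus's most common report

smooth = SmoothingFunction().method1
print(sentence_bleu([reference], generated, smoothing_function=smooth))
print(sentence_bleu([reference], constant_baseline, smoothing_function=smooth))
# a naive constant baseline can score deceptively well on n-gram metrics
```

Because radiology reports share much boilerplate, n-gram overlap rewards templated output, which is why the abstract argues for clinically grounded evaluation instead.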