Dataset columns (field: type, observed length range or class count):

id: string, length 9 to 10
submitter: string, length 1 to 64
authors: string, length 4 to 20.7k
title: string, length 4 to 246
comments: string, length 1 to 523
journal-ref: string, length 4 to 404
doi: string, length 11 to 153
report-no: string, length 2 to 254
categories: string, length 5 to 98
license: string, 9 distinct values
orig_abstract: string, length 14 to 3.35k
versions: list, length 1 to 60
update_date: string, length 10 to 10
authors_parsed: list, length 1 to 1.35k
abstract: string, length 11 to 3.34k
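The column summary above describes one record per paper with these fields. As a minimal sketch of how such a record might be consumed — assuming each record is a JSON object with exactly these keys (the sample values below are abridged from a record later in this dump) — reading the space-separated `categories` field and the `[last, first, suffix]` triples in `authors_parsed` could look like:

```python
import json

# A single record using the fields listed above (values abridged for illustration).
sample = json.dumps({
    "id": "1703.01619",
    "submitter": "Graham Neubig",
    "authors": "Graham Neubig",
    "title": "Neural Machine Translation and Sequence-to-sequence Models: A Tutorial",
    "comments": "65 Pages",
    "journal-ref": None,
    "doi": None,
    "report-no": None,
    "categories": "cs.CL cs.LG stat.ML",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "versions": [{"created": "Sun, 5 Mar 2017 16:10:11 GMT", "version": "v1"}],
    "update_date": "2017-03-07",
    "authors_parsed": [["Neubig", "Graham", ""]],
    "abstract": "This tutorial introduces ...",
})

record = json.loads(sample)

# categories is a single space-separated string of arXiv category codes.
cats = record["categories"].split()

# authors_parsed entries are [last_name, first_name, suffix] triples.
last, first, _suffix = record["authors_parsed"][0]
first_author = f"{first} {last}"

print(cats)          # ['cs.CL', 'cs.LG', 'stat.ML']
print(first_author)  # Graham Neubig
```

Note this is only a sketch under the stated assumptions; the dump itself does not specify its on-disk serialization.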
2206.06620
Weijie Chen
Rang Meng, Weijie Chen, Shicai Yang, Jie Song, Luojun Lin, Di Xie, Shiliang Pu, Xinchao Wang, Mingli Song, Yueting Zhuang
Slimmable Domain Adaptation
To appear in CVPR 2022. Code is coming soon: https://github.com/hikvision-research/SlimDA
IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR), 2022
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vanilla unsupervised domain adaptation methods tend to optimize the model with a fixed neural architecture, which is not very practical in real-world scenarios since the target data is usually processed by different resource-limited devices. It is therefore essential to facilitate architecture adaptation across various devices. In this paper, we introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank, from which models of different capacities can be sampled to accommodate different accuracy-efficiency trade-offs. The main challenge in this framework lies in simultaneously boosting the adaptation performance of numerous models in the model bank. To tackle this problem, we develop a Stochastic EnsEmble Distillation method to fully exploit the complementary knowledge in the model bank for inter-model interaction. Nevertheless, considering the optimization conflict between inter-model interaction and intra-model adaptation, we augment the existing bi-classifier domain confusion architecture into an Optimization-Separated Tri-Classifier counterpart. After optimizing the model bank, architecture adaptation is leveraged via our proposed Unsupervised Performance Evaluation Metric. Under various resource constraints, our framework surpasses other competing approaches by a very large margin on multiple benchmarks. It is also worth emphasizing that our framework can preserve the performance improvement against the source-only model even when the computing complexity is reduced to $1/64$. Code will be available at https://github.com/hikvision-research/SlimDA.
[ { "created": "Tue, 14 Jun 2022 06:28:04 GMT", "version": "v1" } ]
2022-06-15
[ [ "Meng", "Rang", "" ], [ "Chen", "Weijie", "" ], [ "Yang", "Shicai", "" ], [ "Song", "Jie", "" ], [ "Lin", "Luojun", "" ], [ "Xie", "Di", "" ], [ "Pu", "Shiliang", "" ], [ "Wang", "Xinchao", "" ], [ "Song", "Mingli", "" ], [ "Zhuang", "Yueting", "" ] ]
Vanilla unsupervised domain adaptation methods tend to optimize the model with a fixed neural architecture, which is not very practical in real-world scenarios since the target data is usually processed by different resource-limited devices. It is therefore essential to facilitate architecture adaptation across various devices. In this paper, we introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank, from which models of different capacities can be sampled to accommodate different accuracy-efficiency trade-offs. The main challenge in this framework lies in simultaneously boosting the adaptation performance of numerous models in the model bank. To tackle this problem, we develop a Stochastic EnsEmble Distillation method to fully exploit the complementary knowledge in the model bank for inter-model interaction. Nevertheless, considering the optimization conflict between inter-model interaction and intra-model adaptation, we augment the existing bi-classifier domain confusion architecture into an Optimization-Separated Tri-Classifier counterpart. After optimizing the model bank, architecture adaptation is leveraged via our proposed Unsupervised Performance Evaluation Metric. Under various resource constraints, our framework surpasses other competing approaches by a very large margin on multiple benchmarks. It is also worth emphasizing that our framework can preserve the performance improvement against the source-only model even when the computing complexity is reduced to $1/64$. Code will be available at https://github.com/hikvision-research/SlimDA.
1305.0513
Ruoming Jin
Ruoming Jin, Yelong Shen, Lin Liu and Xue-wen Chen
Limiting the Neighborhood: De-Small-World Network for Outbreak Prevention
9 pages
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we study a basic and practically important strategy to help prevent and/or delay an outbreak in the context of networks: limiting the contact between individuals. In this paper, we introduce the average neighborhood size as a new measure for the degree of being small-world and utilize it to formally define the de-small-world network problem. We also prove the NP-hardness of the general reachable-pair cut problem and propose a greedy edge-betweenness-based approach as the benchmark for selecting the candidate edges for solving our problem. Furthermore, we transform the de-small-world network problem into an OR-AND Boolean function maximization problem, which is also NP-hard. In addition, we develop a numerical relaxation approach to solve the Boolean function maximization and the de-small-world problem. Also, we introduce the short-betweenness, which measures the edge importance in terms of all short paths with distance no greater than a certain threshold, and utilize it to speed up our numerical relaxation approach. The experimental evaluation demonstrates the effectiveness and efficiency of our approaches.
[ { "created": "Thu, 2 May 2013 17:03:58 GMT", "version": "v1" } ]
2013-05-03
[ [ "Jin", "Ruoming", "" ], [ "Shen", "Yelong", "" ], [ "Liu", "Lin", "" ], [ "Chen", "Xue-wen", "" ] ]
In this work, we study a basic and practically important strategy to help prevent and/or delay an outbreak in the context of networks: limiting the contact between individuals. In this paper, we introduce the average neighborhood size as a new measure for the degree of being small-world and utilize it to formally define the de-small-world network problem. We also prove the NP-hardness of the general reachable-pair cut problem and propose a greedy edge-betweenness-based approach as the benchmark for selecting the candidate edges for solving our problem. Furthermore, we transform the de-small-world network problem into an OR-AND Boolean function maximization problem, which is also NP-hard. In addition, we develop a numerical relaxation approach to solve the Boolean function maximization and the de-small-world problem. Also, we introduce the short-betweenness, which measures the edge importance in terms of all short paths with distance no greater than a certain threshold, and utilize it to speed up our numerical relaxation approach. The experimental evaluation demonstrates the effectiveness and efficiency of our approaches.
2405.07326
Mahmood Ahmadi
Amirhossein Shahrokhi and Mahmood Ahmadi
Power Evaluation of IOT Application Layer Protocols
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-sa/4.0/
The Internet of Things has affected all aspects of daily life, and the number of IoT devices is increasing day by day. According to forecasts, the number of Internet of Things devices will reach one trillion devices by 2035. The increase in the number of devices connected to the Internet will cause various concerns. One of the most important concerns is the energy and power consumption of these devices. Although Internet of Things modules are low in energy consumption, their widespread and large-scale use has made power consumption the most important challenge in this field. For this reason, it is necessary to use communication protocols that, in addition to establishing efficient communication, impose minimal power consumption on the network. In this paper, application layer protocols such as MQTT, MQTT-SN, CoAP, and HTTP are simulated using the tools available in the Contiki operating system, including COOJA and Powertrace, and they are evaluated and compared with each other in terms of power consumption. According to the simulations performed with the mentioned tools, the MQTT-SN protocol consumed the least power. It is followed by the CoAP protocol and, with a slight difference, the MQTT protocol, which consumes more than MQTT-SN. Finally, the HTTP protocol consumes the most power, which makes it unsuitable for communication in the Internet of Things.
[ { "created": "Sun, 12 May 2024 16:23:52 GMT", "version": "v1" } ]
2024-05-14
[ [ "Shahrokhi", "Amirhossein", "" ], [ "Ahmadi", "Mahmood", "" ] ]
The Internet of Things has affected all aspects of daily life, and the number of IoT devices is increasing day by day. According to forecasts, the number of Internet of Things devices will reach one trillion devices by 2035. The increase in the number of devices connected to the Internet will cause various concerns. One of the most important concerns is the energy and power consumption of these devices. Although Internet of Things modules are low in energy consumption, their widespread and large-scale use has made power consumption the most important challenge in this field. For this reason, it is necessary to use communication protocols that, in addition to establishing efficient communication, impose minimal power consumption on the network. In this paper, application layer protocols such as MQTT, MQTT-SN, CoAP, and HTTP are simulated using the tools available in the Contiki operating system, including COOJA and Powertrace, and they are evaluated and compared with each other in terms of power consumption. According to the simulations performed with the mentioned tools, the MQTT-SN protocol consumed the least power. It is followed by the CoAP protocol and, with a slight difference, the MQTT protocol, which consumes more than MQTT-SN. Finally, the HTTP protocol consumes the most power, which makes it unsuitable for communication in the Internet of Things.
1703.01619
Graham Neubig
Graham Neubig
Neural Machine Translation and Sequence-to-sequence Models: A Tutorial
65 Pages
null
null
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This tutorial introduces a new and powerful set of techniques variously called "neural machine translation" or "neural sequence-to-sequence models". These techniques have been used in a number of tasks regarding the handling of human language, and can be a powerful tool in the toolbox of anyone who wants to model sequential data of some sort. The tutorial assumes that the reader knows the basics of math and programming, but does not assume any particular experience with neural networks or natural language processing. It attempts to explain the intuition behind the various methods covered, then delves into them with enough mathematical detail to understand them concretely, and culminates with a suggestion for an implementation exercise, where readers can test that they understood the content in practice.
[ { "created": "Sun, 5 Mar 2017 16:10:11 GMT", "version": "v1" } ]
2017-03-07
[ [ "Neubig", "Graham", "" ] ]
This tutorial introduces a new and powerful set of techniques variously called "neural machine translation" or "neural sequence-to-sequence models". These techniques have been used in a number of tasks regarding the handling of human language, and can be a powerful tool in the toolbox of anyone who wants to model sequential data of some sort. The tutorial assumes that the reader knows the basics of math and programming, but does not assume any particular experience with neural networks or natural language processing. It attempts to explain the intuition behind the various methods covered, then delves into them with enough mathematical detail to understand them concretely, and culminates with a suggestion for an implementation exercise, where readers can test that they understood the content in practice.
1105.3259
Frank Nielsen
Frank Nielsen and Richard Nock
On R\'enyi and Tsallis entropies and divergences for exponential families
7 pages
Journal of Physics A: Mathematical and Theoretical, Volume 45 Number 3, 2012
10.1088/1751-8113/45/3/032003
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many common probability distributions in statistics like the Gaussian, multinomial, Beta or Gamma distributions can be studied under the unified framework of exponential families. In this paper, we prove that both R\'enyi and Tsallis divergences of distributions belonging to the same exponential family admit a generic closed form expression. Furthermore, we show that R\'enyi and Tsallis entropies can also be calculated in closed-form for sub-families including the Gaussian or exponential distributions, among others.
[ { "created": "Tue, 17 May 2011 02:05:32 GMT", "version": "v1" } ]
2012-02-01
[ [ "Nielsen", "Frank", "" ], [ "Nock", "Richard", "" ] ]
Many common probability distributions in statistics like the Gaussian, multinomial, Beta or Gamma distributions can be studied under the unified framework of exponential families. In this paper, we prove that both R\'enyi and Tsallis divergences of distributions belonging to the same exponential family admit a generic closed form expression. Furthermore, we show that R\'enyi and Tsallis entropies can also be calculated in closed-form for sub-families including the Gaussian or exponential distributions, among others.
1408.3639
Ye Liang
Ye Liang
Solving Polynomial Equations with Equation Constraints: the Zero-dimensional Case
null
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A zero-dimensional polynomial ideal may have many complex zeros. But sometimes, only some of them are needed. In this paper, for a zero-dimensional ideal $I$, we study its complex zeros that lie in another variety $\textbf{V}(J)$ where $J$ is an arbitrary ideal. The main problem is that for a point in $\textbf{V}(I) \cap \textbf{V}(J)=\textbf{V}(I+J)$, its multiplicities w.r.t. $I$ and $I+J$ may be different. Therefore, we cannot get the multiplicity of this point w.r.t. $I$ by studying $I + J$. A straightforward way is to first compute the points of $\textbf{V}(I + J)$, then study their multiplicities w.r.t. $I$. But the former step is difficult to realize exactly. In this paper, we propose a natural geometric explanation of the localization of a polynomial ring corresponding to a semigroup order. Then, based on this view, using the standard basis method and the border basis method, we introduce a way to compute the complex zeros of $I$ in $\textbf{V}(J)$ with their multiplicities w.r.t. $I$. As an application, we compute the sum of Milnor numbers of the singular points on a polynomial hypersurface and work out all the singular points on the hypersurface with their Milnor numbers.
[ { "created": "Fri, 15 Aug 2014 20:03:21 GMT", "version": "v1" } ]
2014-08-19
[ [ "Liang", "Ye", "" ] ]
A zero-dimensional polynomial ideal may have many complex zeros. But sometimes, only some of them are needed. In this paper, for a zero-dimensional ideal $I$, we study its complex zeros that lie in another variety $\textbf{V}(J)$ where $J$ is an arbitrary ideal. The main problem is that for a point in $\textbf{V}(I) \cap \textbf{V}(J)=\textbf{V}(I+J)$, its multiplicities w.r.t. $I$ and $I+J$ may be different. Therefore, we cannot get the multiplicity of this point w.r.t. $I$ by studying $I + J$. A straightforward way is to first compute the points of $\textbf{V}(I + J)$, then study their multiplicities w.r.t. $I$. But the former step is difficult to realize exactly. In this paper, we propose a natural geometric explanation of the localization of a polynomial ring corresponding to a semigroup order. Then, based on this view, using the standard basis method and the border basis method, we introduce a way to compute the complex zeros of $I$ in $\textbf{V}(J)$ with their multiplicities w.r.t. $I$. As an application, we compute the sum of Milnor numbers of the singular points on a polynomial hypersurface and work out all the singular points on the hypersurface with their Milnor numbers.
1407.4056
Dinesh Ramasamy
Dinesh Ramasamy and Upamanyu Madhow
Scalable and Efficient Geographic Routing in Mobile Ad Hoc Wireless Networks
IEEE Transactions on Information Theory
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose and evaluate a scalable position-publish and an accompanying routing protocol which is efficient despite operating with imperfect information regarding the destination's location. The traffic generated by our position-publish protocol fits within the transport capacity of large mobile ad hoc networks (MANETs) with constant communication bandwidth allocated for routing overhead, even as the network size increases. The routing protocol guarantees, with high probability, routes whose lengths are within a constant "stretch" factor of the shortest path from source to destination. The key idea underlying the scalability of the publish protocol is for each potential destination node to send location updates (with frequency decaying with distance) to a subset of network nodes, structured as annular regions around it (the natural approach of updating circular regions in distance-dependent fashion does not scale). The routing protocol must therefore account for the fact that the source and/or relay nodes may not have estimates of the destination's location (or may have stale estimates). Spatial and temporal scaling of protocol parameters are chosen so as to guarantee scalability, route reliability and route stretch, and these analytical design prescriptions are verified using simulations.
[ { "created": "Tue, 15 Jul 2014 16:55:19 GMT", "version": "v1" } ]
2014-07-16
[ [ "Ramasamy", "Dinesh", "" ], [ "Madhow", "Upamanyu", "" ] ]
We propose and evaluate a scalable position-publish and an accompanying routing protocol which is efficient despite operating with imperfect information regarding the destination's location. The traffic generated by our position-publish protocol fits within the transport capacity of large mobile ad hoc networks (MANETs) with constant communication bandwidth allocated for routing overhead, even as the network size increases. The routing protocol guarantees, with high probability, routes whose lengths are within a constant "stretch" factor of the shortest path from source to destination. The key idea underlying the scalability of the publish protocol is for each potential destination node to send location updates (with frequency decaying with distance) to a subset of network nodes, structured as annular regions around it (the natural approach of updating circular regions in distance-dependent fashion does not scale). The routing protocol must therefore account for the fact that the source and/or relay nodes may not have estimates of the destination's location (or may have stale estimates). Spatial and temporal scaling of protocol parameters are chosen so as to guarantee scalability, route reliability and route stretch, and these analytical design prescriptions are verified using simulations.
2407.02651
Majeed Kazemitabaar
Majeed Kazemitabaar, Jack Williams, Ian Drosos, Tovi Grossman, Austin Henley, Carina Negreanu, Advait Sarkar
Improving Steering and Verification in AI-Assisted Data Analysis with Interactive Task Decomposition
Published at UIST 2024; 19 pages, 9 figures, and 2 tables
Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST 2024)
10.1145/3654777.3676345
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
LLM-powered tools like ChatGPT Data Analysis have the potential to help users tackle the challenging task of data analysis programming, which requires expertise in data processing, programming, and statistics. However, our formative study (n=15) uncovered serious challenges in verifying AI-generated results and steering the AI (i.e., guiding the AI system to produce the desired output). We developed two contrasting approaches to address these challenges. The first (Stepwise) decomposes the problem into step-by-step subgoals with pairs of editable assumptions and code until task completion, while the second (Phasewise) decomposes the entire problem into three editable, logical phases: structured input/output assumptions, execution plan, and code. A controlled, within-subjects experiment (n=18) compared these systems against a conversational baseline. Users reported significantly greater control with the Stepwise and Phasewise systems, and found intervention, correction, and verification easier, compared to the baseline. The results suggest design guidelines and trade-offs for AI-assisted data analysis tools.
[ { "created": "Tue, 2 Jul 2024 20:33:50 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2024 15:56:00 GMT", "version": "v2" } ]
2024-08-02
[ [ "Kazemitabaar", "Majeed", "" ], [ "Williams", "Jack", "" ], [ "Drosos", "Ian", "" ], [ "Grossman", "Tovi", "" ], [ "Henley", "Austin", "" ], [ "Negreanu", "Carina", "" ], [ "Sarkar", "Advait", "" ] ]
LLM-powered tools like ChatGPT Data Analysis have the potential to help users tackle the challenging task of data analysis programming, which requires expertise in data processing, programming, and statistics. However, our formative study (n=15) uncovered serious challenges in verifying AI-generated results and steering the AI (i.e., guiding the AI system to produce the desired output). We developed two contrasting approaches to address these challenges. The first (Stepwise) decomposes the problem into step-by-step subgoals with pairs of editable assumptions and code until task completion, while the second (Phasewise) decomposes the entire problem into three editable, logical phases: structured input/output assumptions, execution plan, and code. A controlled, within-subjects experiment (n=18) compared these systems against a conversational baseline. Users reported significantly greater control with the Stepwise and Phasewise systems, and found intervention, correction, and verification easier, compared to the baseline. The results suggest design guidelines and trade-offs for AI-assisted data analysis tools.
2304.07689
Zhiyuan Li
Zhiyuan Li, Ziru Liu, Anna Zou, Anca L. Ralescu
Learning Empirical Bregman Divergence for Uncertain Distance Representation
Accepted by IEEE FUSION 2023
null
10.23919/FUSION52260.2023.10224080
null
cs.CV cs.AI cs.IT cs.LG math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep metric learning techniques have been used for visual representation in various supervised and unsupervised learning tasks through learning embeddings of samples with deep networks. However, classic approaches, which employ a fixed distance metric as a similarity function between two embeddings, may lead to suboptimal performance for capturing the complex data distribution. The Bregman divergence generalizes measures of various distance metrics and arises throughout many fields of deep metric learning. In this paper, we first show how deep metric learning loss can arise from the Bregman divergence. We then introduce a novel method for learning empirical Bregman divergence directly from data based on parameterizing the convex function underlying the Bregman divergence with a deep learning setting. We further experimentally show that our approach performs effectively on five popular public datasets compared to other SOTA deep metric learning methods, particularly for pattern recognition problems.
[ { "created": "Sun, 16 Apr 2023 04:16:28 GMT", "version": "v1" }, { "created": "Tue, 18 Apr 2023 01:22:50 GMT", "version": "v2" }, { "created": "Mon, 15 May 2023 16:38:23 GMT", "version": "v3" } ]
2023-08-30
[ [ "Li", "Zhiyuan", "" ], [ "Liu", "Ziru", "" ], [ "Zou", "Anna", "" ], [ "Ralescu", "Anca L.", "" ] ]
Deep metric learning techniques have been used for visual representation in various supervised and unsupervised learning tasks through learning embeddings of samples with deep networks. However, classic approaches, which employ a fixed distance metric as a similarity function between two embeddings, may lead to suboptimal performance for capturing the complex data distribution. The Bregman divergence generalizes measures of various distance metrics and arises throughout many fields of deep metric learning. In this paper, we first show how deep metric learning loss can arise from the Bregman divergence. We then introduce a novel method for learning empirical Bregman divergence directly from data based on parameterizing the convex function underlying the Bregman divergence with a deep learning setting. We further experimentally show that our approach performs effectively on five popular public datasets compared to other SOTA deep metric learning methods, particularly for pattern recognition problems.
2205.13190
Haitao Lin
Haitao Lin, Junnan Zhu, Lu Xiang, Yu Zhou, Jiajun Zhang, Chengqing Zong
Other Roles Matter! Enhancing Role-Oriented Dialogue Summarization via Role Interactions
Accepted by ACL 2022 main conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Role-oriented dialogue summarization aims to generate summaries for different roles in a dialogue, e.g., merchants and consumers. Existing methods handle this task by summarizing each role's content separately and are thus prone to ignoring the information from other roles. However, we believe that other roles' content could benefit the quality of summaries, such as the omitted information mentioned by other roles. Therefore, we propose a novel role interaction enhanced method for role-oriented dialogue summarization. It adopts cross attention and decoder self-attention interactions to interactively acquire other roles' critical information. The cross attention interaction aims to select other roles' critical dialogue utterances, while the decoder self-attention interaction aims to obtain key information from other roles' summaries. Experimental results have shown that our proposed method significantly outperforms strong baselines on two public role-oriented dialogue summarization datasets. Extensive analyses have demonstrated that other roles' content could help generate summaries with more complete semantics and correct topic structures.
[ { "created": "Thu, 26 May 2022 06:58:02 GMT", "version": "v1" } ]
2022-05-27
[ [ "Lin", "Haitao", "" ], [ "Zhu", "Junnan", "" ], [ "Xiang", "Lu", "" ], [ "Zhou", "Yu", "" ], [ "Zhang", "Jiajun", "" ], [ "Zong", "Chengqing", "" ] ]
Role-oriented dialogue summarization aims to generate summaries for different roles in a dialogue, e.g., merchants and consumers. Existing methods handle this task by summarizing each role's content separately and are thus prone to ignoring the information from other roles. However, we believe that other roles' content could benefit the quality of summaries, such as the omitted information mentioned by other roles. Therefore, we propose a novel role interaction enhanced method for role-oriented dialogue summarization. It adopts cross attention and decoder self-attention interactions to interactively acquire other roles' critical information. The cross attention interaction aims to select other roles' critical dialogue utterances, while the decoder self-attention interaction aims to obtain key information from other roles' summaries. Experimental results have shown that our proposed method significantly outperforms strong baselines on two public role-oriented dialogue summarization datasets. Extensive analyses have demonstrated that other roles' content could help generate summaries with more complete semantics and correct topic structures.
2406.19878
Ruben Interian
Ruben Interian
A political radicalization framework based on Moral Foundations Theory
null
null
null
null
cs.SI physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Moral Foundations Theory proposes that individuals with conflicting political views base their behavior on different principles chosen from a small group of universal moral foundations. This study proposes using a set of widely accepted moral foundations (Fairness, Ingroup loyalty, Authority, and Purity) as proxies to determine the degree of radicalization of online communities. The fifth principle, Care, is generally surpassed by others, which are higher in the radicalized groups' moral hierarchy. Moreover, the presented data-driven methodological framework proposes an alternative way to measure whether a community complies with some moral principle or foundation: not evaluating its speech, but its behavior through interactions of its individuals, establishing a bridge between structural features of the interaction network and the intensity of communities' radicalization regarding the considered moral foundations. Two foundations may be assessed using the network's structural characteristics: Ingroup loyalty measured by group-level modularity, and Authority evaluated using group domination for detecting potential hierarchical substructures within the network. By analyzing the set of Pareto-optimal groups regarding a multidimensional moral relevance scale, the most radicalized communities are identified among those considered extreme in some of their attitudes or views. The application of the proposed framework is illustrated using real-world datasets. The radicalized communities' behavior exhibits increasing isolation, and its authorities and leaders show growing domination over their audience. There were also detected differences between users' behavior and speech, showing that individuals tend to share more 'extreme' ingroup content than that they publish: extreme views get more likes on social media.
[ { "created": "Fri, 28 Jun 2024 12:36:06 GMT", "version": "v1" } ]
2024-07-01
[ [ "Interian", "Ruben", "" ] ]
Moral Foundations Theory proposes that individuals with conflicting political views base their behavior on different principles chosen from a small group of universal moral foundations. This study proposes using a set of widely accepted moral foundations (Fairness, Ingroup loyalty, Authority, and Purity) as proxies to determine the degree of radicalization of online communities. The fifth principle, Care, is generally surpassed by others, which are higher in the radicalized groups' moral hierarchy. Moreover, the presented data-driven methodological framework proposes an alternative way to measure whether a community complies with some moral principle or foundation: not evaluating its speech, but its behavior through interactions of its individuals, establishing a bridge between structural features of the interaction network and the intensity of communities' radicalization regarding the considered moral foundations. Two foundations may be assessed using the network's structural characteristics: Ingroup loyalty measured by group-level modularity, and Authority evaluated using group domination for detecting potential hierarchical substructures within the network. By analyzing the set of Pareto-optimal groups regarding a multidimensional moral relevance scale, the most radicalized communities are identified among those considered extreme in some of their attitudes or views. The application of the proposed framework is illustrated using real-world datasets. The radicalized communities' behavior exhibits increasing isolation, and its authorities and leaders show growing domination over their audience. There were also detected differences between users' behavior and speech, showing that individuals tend to share more 'extreme' ingroup content than that they publish: extreme views get more likes on social media.
2403.09176
Byeongjun Park
Byeongjun Park, Hyojun Go, Jin-Young Kim, Sangmin Woo, Seokil Ham, Changick Kim
Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts
Project Page: https://byeongjun-park.github.io/Switch-DiT/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Diffusion models have achieved remarkable success across a range of generative tasks. Recent efforts to enhance diffusion model architectures have reimagined them as a form of multi-task learning, where each task corresponds to a denoising task at a specific noise level. While these efforts have focused on parameter isolation and task routing, they fall short of capturing detailed inter-task relationships and risk losing semantic information, respectively. In response, we introduce Switch Diffusion Transformer (Switch-DiT), which establishes inter-task relationships between conflicting tasks without compromising semantic information. To achieve this, we employ a sparse mixture-of-experts within each transformer block to utilize semantic information and facilitate handling conflicts in tasks through parameter isolation. Additionally, we propose a diffusion prior loss, encouraging similar tasks to share their denoising paths while isolating conflicting ones. Through these, each transformer block contains a shared expert across all tasks, where the common and task-specific denoising paths enable the diffusion model to construct its beneficial way of synergizing denoising tasks. Extensive experiments validate the effectiveness of our approach in improving both image quality and convergence rate, and further analysis demonstrates that Switch-DiT constructs tailored denoising paths across various generation scenarios.
[ { "created": "Thu, 14 Mar 2024 08:43:43 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2024 07:39:08 GMT", "version": "v2" } ]
2024-07-11
[ [ "Park", "Byeongjun", "" ], [ "Go", "Hyojun", "" ], [ "Kim", "Jin-Young", "" ], [ "Woo", "Sangmin", "" ], [ "Ham", "Seokil", "" ], [ "Kim", "Changick", "" ] ]
Diffusion models have achieved remarkable success across a range of generative tasks. Recent efforts to enhance diffusion model architectures have reimagined them as a form of multi-task learning, where each task corresponds to a denoising task at a specific noise level. While these efforts have focused on parameter isolation and task routing, they fall short of capturing detailed inter-task relationships and risk losing semantic information, respectively. In response, we introduce Switch Diffusion Transformer (Switch-DiT), which establishes inter-task relationships between conflicting tasks without compromising semantic information. To achieve this, we employ a sparse mixture-of-experts within each transformer block to utilize semantic information and facilitate handling conflicts in tasks through parameter isolation. Additionally, we propose a diffusion prior loss, encouraging similar tasks to share their denoising paths while isolating conflicting ones. Through these, each transformer block contains a shared expert across all tasks, where the common and task-specific denoising paths enable the diffusion model to construct its beneficial way of synergizing denoising tasks. Extensive experiments validate the effectiveness of our approach in improving both image quality and convergence rate, and further analysis demonstrates that Switch-DiT constructs tailored denoising paths across various generation scenarios.
1509.04252
Andreas Kreienbuehl
Andreas Kreienbuehl and Arne Naegel and Daniel Ruprecht and Andreas Vogel and Gabriel Wittum and Rolf Krause
Parareal convergence for 2D unsteady flow around a cylinder
16 pages, 7 figures
null
null
null
cs.CE cs.DC cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this technical report we study the convergence of Parareal for 2D incompressible flow around a cylinder for different viscosities. Two methods are used as the fine integrator: backward Euler and a fractional step method. It is found that Parareal converges better with backward Euler, likely because it under-resolves the fine-scale dynamics as a result of numerical diffusion.
[ { "created": "Mon, 14 Sep 2015 19:29:41 GMT", "version": "v1" } ]
2015-09-15
[ [ "Kreienbuehl", "Andreas", "" ], [ "Naegel", "Arne", "" ], [ "Ruprecht", "Daniel", "" ], [ "Vogel", "Andreas", "" ], [ "Wittum", "Gabriel", "" ], [ "Krause", "Rolf", "" ] ]
In this technical report we study the convergence of Parareal for 2D incompressible flow around a cylinder for different viscosities. Two methods are used as the fine integrator: backward Euler and a fractional step method. It is found that Parareal converges better with backward Euler, likely because it under-resolves the fine-scale dynamics as a result of numerical diffusion.
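The Parareal iteration under study can be sketched on a scalar model problem (y' = -y, with backward Euler as both coarse and fine propagator). This is an illustrative toy, nothing like the report's 2D Navier-Stokes configuration:

```python
def backward_euler(y, interval, steps):
    """Implicit Euler for the scalar test problem y' = -y:
    each step solves y_new = y + dt * (-y_new), i.e. y_new = y / (1 + dt)."""
    dt = interval / steps
    for _ in range(steps):
        y = y / (1 + dt)
    return y

def parareal(y0, T, n_slices, iters, coarse_steps=1, fine_steps=100):
    dT = T / n_slices
    G = lambda y: backward_euler(y, dT, coarse_steps)  # cheap coarse propagator
    F = lambda y: backward_euler(y, dT, fine_steps)    # expensive fine propagator
    U = [y0]                        # iteration 0: serial coarse sweep
    for n in range(n_slices):
        U.append(G(U[n]))
    for _ in range(iters):
        F_old = [F(U[n]) for n in range(n_slices)]  # parallel across slices
        G_old = [G(U[n]) for n in range(n_slices)]
        U_new = [y0]
        for n in range(n_slices):
            # Parareal update: U_{n+1} = G(U_n^new) + F(U_n^old) - G(U_n^old)
            U_new.append(G(U_new[n]) + F_old[n] - G_old[n])
        U = U_new
    return U[-1]

# After n_slices corrections Parareal matches the serial fine solution
# up to round-off; fewer iterations trade accuracy for wall-clock speedup.
print(abs(parareal(1.0, 1.0, 4, 4) - backward_euler(1.0, 1.0, 400)) < 1e-9)  # → True
```

The speedup comes from evaluating the expensive fine propagator F concurrently over all time slices, with only the cheap coarse sweep G running serially.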
2202.13026
Flash Sheridan
Flash Sheridan
Static Analysis Deployment Pitfalls
null
Supplemental Proceedings of the 21st IEEE International Symposium on Software Reliability Engineering, November 2010
null
null
cs.SE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organizational, political, and configuration mistakes in the deployment of a static source code analysis tool within a software development organization can result in most of the value of the tool being lost, even while apparently meeting management goals. A list of pitfalls encountered as a static analysis consultant is presented, with discussion of techniques for avoiding or mitigating them. This is part of a work in progress, tentatively entitled "Handbook of Static Analysis Deployment."
[ { "created": "Sat, 26 Feb 2022 01:01:08 GMT", "version": "v1" } ]
2022-03-01
[ [ "Sheridan", "Flash", "" ] ]
Organizational, political, and configuration mistakes in the deployment of a static source code analysis tool within a software development organization can result in most of the value of the tool being lost, even while apparently meeting management goals. A list of pitfalls encountered as a static analysis consultant is presented, with discussion of techniques for avoiding or mitigating them. This is part of a work in progress, tentatively entitled "Handbook of Static Analysis Deployment."
2305.13913
Yun Li
Yun Li, Hongwei Liu, Sihem Mesnager
Constructions of Constant Dimension Subspace Codes
This article was submitted to Designs, Codes and Cryptography on November 22nd, 2022
null
null
null
cs.IT math.CO math.IT
http://creativecommons.org/licenses/by/4.0/
Subspace codes have important applications in random network coding. It is interesting to construct subspace codes whose sizes and minimum distances are both as large as possible. In particular, cyclic constant dimension subspace codes have additional properties which can be used to make encoding and decoding more efficient. In this paper, we construct large cyclic constant dimension subspace codes with minimum distances $2k-2$ and $2k$. These codes are contained in $\mathcal{G}_q(n, k)$, where $\mathcal{G}_q(n, k)$ denotes the set of all $k$-dimensional subspaces of $\mathbb{F}_{q^n}$. Consequently, some results in \cite{FW}, \cite{NXG}, and \cite{ZT} are extended.
[ { "created": "Tue, 23 May 2023 10:37:00 GMT", "version": "v1" } ]
2023-05-24
[ [ "Li", "Yun", "" ], [ "Liu", "Hongwei", "" ], [ "Mesnager", "Sihem", "" ] ]
Subspace codes have important applications in random network coding. It is interesting to construct subspace codes whose sizes and minimum distances are both as large as possible. In particular, cyclic constant dimension subspace codes have additional properties which can be used to make encoding and decoding more efficient. In this paper, we construct large cyclic constant dimension subspace codes with minimum distances $2k-2$ and $2k$. These codes are contained in $\mathcal{G}_q(n, k)$, where $\mathcal{G}_q(n, k)$ denotes the set of all $k$-dimensional subspaces of $\mathbb{F}_{q^n}$. Consequently, some results in \cite{FW}, \cite{NXG}, and \cite{ZT} are extended.
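For context, the minimum distances $2k-2$ and $2k$ refer to the standard subspace distance on $\mathcal{G}_q(n, k)$ (a textbook definition, not quoted from the paper):

```latex
d(U, V) \;=\; \dim(U + V) - \dim(U \cap V) \;=\; \dim U + \dim V - 2\dim(U \cap V).
```

For $U, V \in \mathcal{G}_q(n, k)$ this distance is always even and at most $2k$ (attained exactly when $U \cap V = \{0\}$), so $2k-2$ and $2k$ are the two largest minimum distances a constant dimension code can achieve.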
1704.02789
Mahardhika Pratama Dr
Mahardhika Pratama, Plamen P. Angelov, Edwin Lughofer
Parsimonious Random Vector Functional Link Network for Data Streams
this paper is submitted for publication in Information Sciences
null
10.1016/j.ins.2017.11.050
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The theory of the random vector functional link network (RVFLN) has provided a breakthrough in the design of neural networks (NNs) since it conveys solid theoretical justification of randomized learning. Existing works on RVFLNs are hardly scalable for data stream analytics because they suffer from complexity issues arising from the absence of structural learning scenarios. A novel class of RVFLN, namely the parsimonious random vector functional link network (pRVFLN), is proposed in this paper. pRVFLN features an open structure paradigm where its network structure can be built from scratch and generated automatically in accordance with the degree of nonlinearity and the time-varying property of the system being modelled. pRVFLN is equipped with complexity reduction scenarios where inconsequential hidden nodes can be pruned and input features can be dynamically selected. pRVFLN puts into perspective an online active learning mechanism which expedites the training process and relieves operator labelling efforts. In addition, pRVFLN introduces a non-parametric type of hidden node, developed using an interval-valued data cloud. The hidden node completely reflects the real data distribution and is not constrained by a specific cluster shape. All learning procedures of pRVFLN follow a strictly single-pass learning mode, which is applicable for online real-time deployment. The efficacy of pRVFLN was rigorously validated through numerous simulations and comparisons with state-of-the-art algorithms, where it produced the most encouraging numerical results. Furthermore, the robustness of pRVFLN was investigated, and a new conclusion is drawn about the scope of the random parameters, which play a vital role in the success of randomized learning.
[ { "created": "Mon, 10 Apr 2017 10:24:34 GMT", "version": "v1" }, { "created": "Sat, 6 May 2017 11:59:53 GMT", "version": "v2" } ]
2018-02-06
[ [ "Pratama", "Mahardhika", "" ], [ "Angelov", "Plamen P.", "" ], [ "Lughofer", "Edwin", "" ] ]
The theory of the random vector functional link network (RVFLN) has provided a breakthrough in the design of neural networks (NNs) since it conveys solid theoretical justification of randomized learning. Existing works on RVFLNs are hardly scalable for data stream analytics because they suffer from complexity issues arising from the absence of structural learning scenarios. A novel class of RVFLN, namely the parsimonious random vector functional link network (pRVFLN), is proposed in this paper. pRVFLN features an open structure paradigm where its network structure can be built from scratch and generated automatically in accordance with the degree of nonlinearity and the time-varying property of the system being modelled. pRVFLN is equipped with complexity reduction scenarios where inconsequential hidden nodes can be pruned and input features can be dynamically selected. pRVFLN puts into perspective an online active learning mechanism which expedites the training process and relieves operator labelling efforts. In addition, pRVFLN introduces a non-parametric type of hidden node, developed using an interval-valued data cloud. The hidden node completely reflects the real data distribution and is not constrained by a specific cluster shape. All learning procedures of pRVFLN follow a strictly single-pass learning mode, which is applicable for online real-time deployment. The efficacy of pRVFLN was rigorously validated through numerous simulations and comparisons with state-of-the-art algorithms, where it produced the most encouraging numerical results. Furthermore, the robustness of pRVFLN was investigated, and a new conclusion is drawn about the scope of the random parameters, which play a vital role in the success of randomized learning.
2106.00451
Fan Huang
Fan Huang
Highlight Timestamp Detection Model for Comedy Videos via Multimodal Sentiment Analysis
null
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, videos on the Internet are prevalent. A precise and in-depth understanding of videos is a difficult but valuable problem for both platforms and researchers. Existing video understanding models do well in object recognition tasks but still cannot understand abstract and contextual features such as highlight humor frames in comedy videos. Current industrial work is also mainly focused on basic category classification based on the appearance of objects, while feature detection methods for abstract categories remain lacking. A data structure that includes the information of video frames, audio spectra and texts provides a new direction to explore. Multimodal models are proposed to make this in-depth video understanding mission possible. In this paper, we analyze the difficulties in the abstract understanding of videos and propose a multimodal structure to obtain state-of-the-art performance in this field. We then select several benchmarks for multimodal video understanding and apply the most suitable model to find the best performance. Finally, we evaluate the overall strengths and drawbacks of the models and methods in this paper and point out possible directions for further improvement.
[ { "created": "Fri, 28 May 2021 08:39:19 GMT", "version": "v1" } ]
2021-06-02
[ [ "Huang", "Fan", "" ] ]
Nowadays, videos on the Internet are prevalent. A precise and in-depth understanding of videos is a difficult but valuable problem for both platforms and researchers. Existing video understanding models do well in object recognition tasks but still cannot understand abstract and contextual features such as highlight humor frames in comedy videos. Current industrial work is also mainly focused on basic category classification based on the appearance of objects, while feature detection methods for abstract categories remain lacking. A data structure that includes the information of video frames, audio spectra and texts provides a new direction to explore. Multimodal models are proposed to make this in-depth video understanding mission possible. In this paper, we analyze the difficulties in the abstract understanding of videos and propose a multimodal structure to obtain state-of-the-art performance in this field. We then select several benchmarks for multimodal video understanding and apply the most suitable model to find the best performance. Finally, we evaluate the overall strengths and drawbacks of the models and methods in this paper and point out possible directions for further improvement.
2401.14579
Ying Dai
Kun Fu, and Ying Dai
Recognizing Multiple Ingredients in Food Images Using a Single-Ingredient Classification Model
9 pages, 21 figures, 6 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing food images presents unique challenges due to the variable spatial layout and shape changes of ingredients with different cooking and cutting methods. This study introduces an advanced approach for recognizing ingredients segmented from food images. The method localizes the candidate regions of the ingredients using the locating and sliding window techniques. Then, these regions are assigned into ingredient classes using a CNN (Convolutional Neural Network)-based single-ingredient classification model trained on a dataset of single-ingredient images. To address the challenge of processing speed in multi-ingredient recognition, a novel model pruning method is proposed that enhances the efficiency of the classification model. Subsequently, the multi-ingredient identification is achieved through a decision-making scheme, incorporating two novel algorithms. The single-ingredient image dataset, designed in accordance with the book entitled "New Food Ingredients List FOODS 2021", encompasses 9982 images across 110 diverse categories, emphasizing variety in ingredient shapes. In addition, a multi-ingredient image dataset is developed to rigorously evaluate the performance of our approach. Experimental results validate the effectiveness of our method, particularly highlighting its improved capability in recognizing multiple ingredients. This marks a significant advancement in the field of food image analysis.
[ { "created": "Fri, 26 Jan 2024 00:46:56 GMT", "version": "v1" }, { "created": "Wed, 14 Feb 2024 11:58:59 GMT", "version": "v2" }, { "created": "Mon, 19 Feb 2024 01:43:00 GMT", "version": "v3" } ]
2024-02-20
[ [ "Fu", "Kun", "" ], [ "Dai", "Ying", "" ] ]
Recognizing food images presents unique challenges due to the variable spatial layout and shape changes of ingredients with different cooking and cutting methods. This study introduces an advanced approach for recognizing ingredients segmented from food images. The method localizes the candidate regions of the ingredients using the locating and sliding window techniques. Then, these regions are assigned into ingredient classes using a CNN (Convolutional Neural Network)-based single-ingredient classification model trained on a dataset of single-ingredient images. To address the challenge of processing speed in multi-ingredient recognition, a novel model pruning method is proposed that enhances the efficiency of the classification model. Subsequently, the multi-ingredient identification is achieved through a decision-making scheme, incorporating two novel algorithms. The single-ingredient image dataset, designed in accordance with the book entitled "New Food Ingredients List FOODS 2021", encompasses 9982 images across 110 diverse categories, emphasizing variety in ingredient shapes. In addition, a multi-ingredient image dataset is developed to rigorously evaluate the performance of our approach. Experimental results validate the effectiveness of our method, particularly highlighting its improved capability in recognizing multiple ingredients. This marks a significant advancement in the field of food image analysis.
1212.4931
Anatolii Leukhin Nikolaevich
Anatolii Leukhin, Oscar Moreno and Andrew Tirkel
Secure CDMA Sequences
10 pages, 8 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single sequences like the Legendre sequence have high linear complexity, while known CDMA families of sequences all have low complexities. We present a new method of constructing CDMA sequence sets with the complexity of the Legendre sequence from new frequency hop patterns, and compare them with known sequences. These are the first families whose normalized linear complexities do not asymptote to 0, verified for lengths up to $6\times 10^8$. The new constructions in array format are also useful in watermarking images. We present a conjecture regarding the recursion polynomials. We also have a method to reverse the process, and from small Kasami/No-Kumar sequences we obtain a new family of $2^n$ doubly periodic $(2^n+1)\times(2^n-1)$ frequency hop patterns with correlation 2.
[ { "created": "Thu, 20 Dec 2012 06:04:04 GMT", "version": "v1" } ]
2012-12-21
[ [ "Leukhin", "Anatolii", "" ], [ "Moreno", "Oscar", "" ], [ "Tirkel", "Andrew", "" ] ]
Single sequences like the Legendre sequence have high linear complexity, while known CDMA families of sequences all have low complexities. We present a new method of constructing CDMA sequence sets with the complexity of the Legendre sequence from new frequency hop patterns, and compare them with known sequences. These are the first families whose normalized linear complexities do not asymptote to 0, verified for lengths up to $6\times 10^8$. The new constructions in array format are also useful in watermarking images. We present a conjecture regarding the recursion polynomials. We also have a method to reverse the process, and from small Kasami/No-Kumar sequences we obtain a new family of $2^n$ doubly periodic $(2^n+1)\times(2^n-1)$ frequency hop patterns with correlation 2.
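Linear complexity, the figure of merit throughout, is the length of the shortest LFSR (linear recurrence over GF(2)) that reproduces a sequence, and it is computable with the Berlekamp-Massey algorithm. A compact illustrative implementation, not the authors' verification code:

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): returns the length of the shortest
    LFSR (linear recurrence) that generates the given bit sequence."""
    n = len(bits)
    c, b = [0] * n, [0] * n      # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                 # current LFSR length, index of last length change
    for i in range(n):
        # Discrepancy: next output of the current LFSR vs the actual bit.
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]   # c(x) += x^(i-m) * b(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# One period of the m-sequence from x^3 + x + 1 has linear complexity 3.
print(linear_complexity([1, 0, 0, 1, 0, 1, 1]))  # → 3
print(linear_complexity([1] * 16))               # → 1 (constant sequence)
```

A family whose normalized complexity L/n stays bounded away from 0, as claimed above, resists reconstruction of the generator from short intercepted segments.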
1904.09046
Laurent Lessard
Laurent Lessard, Peter Seiler
Direct Synthesis of Iterative Algorithms With Bounds on Achievable Worst-Case Convergence Rate
American Control Conference, 2020
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Iterative first-order methods such as gradient descent and its variants are widely used for solving optimization and machine learning problems. There has been recent interest in analytic or numerically efficient methods for computing worst-case performance bounds for such algorithms, for example over the class of strongly convex loss functions. A popular approach is to assume the algorithm has a fixed size (fixed dimension, or memory) and that its structure is parameterized by one or two hyperparameters, for example a learning rate and a momentum parameter. Then, a Lyapunov function is sought to certify robust stability and subsequent optimization can be performed to find optimal hyperparameter tunings. In the present work, we instead fix the constraints that characterize the loss function and apply techniques from robust control synthesis to directly search over algorithms. This approach yields stronger results than those previously available, since the bounds produced hold over algorithms with an arbitrary, but finite, amount of memory rather than just holding for algorithms with a prescribed structure.
[ { "created": "Fri, 19 Apr 2019 01:07:50 GMT", "version": "v1" }, { "created": "Sat, 21 Mar 2020 03:18:18 GMT", "version": "v2" } ]
2020-03-24
[ [ "Lessard", "Laurent", "" ], [ "Seiler", "Peter", "" ] ]
Iterative first-order methods such as gradient descent and its variants are widely used for solving optimization and machine learning problems. There has been recent interest in analytic or numerically efficient methods for computing worst-case performance bounds for such algorithms, for example over the class of strongly convex loss functions. A popular approach is to assume the algorithm has a fixed size (fixed dimension, or memory) and that its structure is parameterized by one or two hyperparameters, for example a learning rate and a momentum parameter. Then, a Lyapunov function is sought to certify robust stability and subsequent optimization can be performed to find optimal hyperparameter tunings. In the present work, we instead fix the constraints that characterize the loss function and apply techniques from robust control synthesis to directly search over algorithms. This approach yields stronger results than those previously available, since the bounds produced hold over algorithms with an arbitrary, but finite, amount of memory rather than just holding for algorithms with a prescribed structure.
2309.09609
Enzo Rucci
Manuel Costanzo, Enzo Rucci, Carlos Garc\'ia S\'anchez, Marcelo Naiouf, Manuel Prieto-Mat\'ias
Comparing Performance and Portability between CUDA and SYCL for Protein Database Search on NVIDIA, AMD, and Intel GPUs
This article was accepted for publication in 2023 IEEE 35th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)
null
10.1109/SBAC-PAD59825.2023.00023
null
cs.PL
http://creativecommons.org/licenses/by-nc-sa/4.0/
The heterogeneous computing paradigm has led to the need for portable and efficient programming solutions that can leverage the capabilities of various hardware devices, such as NVIDIA, Intel, and AMD GPUs. This study evaluates the portability and performance of the SYCL and CUDA languages for one fundamental bioinformatics application (Smith-Waterman protein database search) across different GPU architectures, considering single and multi-GPU configurations from different vendors. The experimental work showed that, while both CUDA and SYCL versions achieve similar performance on NVIDIA devices, the latter demonstrated remarkable code portability to other GPU architectures, such as AMD and Intel. Furthermore, the architectural efficiency rates achieved on these devices were superior in 3 of the 4 cases tested. This brief study highlights the potential of SYCL as a viable solution for achieving both performance and portability in the heterogeneous computing ecosystem.
[ { "created": "Mon, 18 Sep 2023 09:26:46 GMT", "version": "v1" }, { "created": "Fri, 10 Nov 2023 12:11:08 GMT", "version": "v2" } ]
2023-11-13
[ [ "Costanzo", "Manuel", "" ], [ "Rucci", "Enzo", "" ], [ "Sánchez", "Carlos García", "" ], [ "Naiouf", "Marcelo", "" ], [ "Prieto-Matías", "Manuel", "" ] ]
The heterogeneous computing paradigm has led to the need for portable and efficient programming solutions that can leverage the capabilities of various hardware devices, such as NVIDIA, Intel, and AMD GPUs. This study evaluates the portability and performance of the SYCL and CUDA languages for one fundamental bioinformatics application (Smith-Waterman protein database search) across different GPU architectures, considering single and multi-GPU configurations from different vendors. The experimental work showed that, while both CUDA and SYCL versions achieve similar performance on NVIDIA devices, the latter demonstrated remarkable code portability to other GPU architectures, such as AMD and Intel. Furthermore, the architectural efficiency rates achieved on these devices were superior in 3 of the 4 cases tested. This brief study highlights the potential of SYCL as a viable solution for achieving both performance and portability in the heterogeneous computing ecosystem.
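For readers unfamiliar with the workload, the kernel being ported is Smith-Waterman local alignment. A scoring-only sketch of the recurrence follows (an illustrative linear-gap Python version, nothing like the optimized CUDA/SYCL implementations compared in the paper, which use affine gaps and substitution matrices):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score with a linear gap penalty:
    H[i][j] = max(0, diagonal + substitution, up + gap, left + gap)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGT", "ACGT"))  # → 8 (four matches at +2 each)
print(smith_waterman("ABC", "XBCX"))   # → 4 (local hit on "BC")
```

Cells on the same anti-diagonal of H depend only on earlier anti-diagonals and can be computed in parallel, which is what makes this kernel a natural GPU target.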
2201.12023
Zhuohan Li
Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Eric P. Xing, Joseph E. Gonzalez, Ion Stoica
Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning
OSDI 2022
null
null
null
cs.LG cs.DC cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alpa automates model-parallel training of large deep learning (DL) models by generating execution plans that unify data, operator, and pipeline parallelism. Existing model-parallel training systems either require users to manually create a parallelization plan or automatically generate one from a limited space of model parallelism configurations. They do not suffice to scale out complex DL models on distributed compute devices. Alpa distributes the training of large DL models by viewing parallelisms as two hierarchical levels: inter-operator and intra-operator parallelisms. Based on it, Alpa constructs a new hierarchical space for massive model-parallel execution plans. Alpa designs a number of compilation passes to automatically derive efficient parallel execution plans at each parallelism level. Alpa implements an efficient runtime to orchestrate the two-level parallel execution on distributed compute devices. Our evaluation shows Alpa generates parallelization plans that match or outperform hand-tuned model-parallel training systems even on models they are designed for. Unlike specialized systems, Alpa also generalizes to models with heterogeneous architectures and models without manually-designed plans. Alpa's source code is publicly available at https://github.com/alpa-projects/alpa
[ { "created": "Fri, 28 Jan 2022 10:13:35 GMT", "version": "v1" }, { "created": "Fri, 3 Jun 2022 09:18:24 GMT", "version": "v2" }, { "created": "Tue, 28 Jun 2022 19:36:44 GMT", "version": "v3" } ]
2022-06-30
[ [ "Zheng", "Lianmin", "" ], [ "Li", "Zhuohan", "" ], [ "Zhang", "Hao", "" ], [ "Zhuang", "Yonghao", "" ], [ "Chen", "Zhifeng", "" ], [ "Huang", "Yanping", "" ], [ "Wang", "Yida", "" ], [ "Xu", "Yuanzhong", "" ], [ "Zhuo", "Danyang", "" ], [ "Xing", "Eric P.", "" ], [ "Gonzalez", "Joseph E.", "" ], [ "Stoica", "Ion", "" ] ]
Alpa automates model-parallel training of large deep learning (DL) models by generating execution plans that unify data, operator, and pipeline parallelism. Existing model-parallel training systems either require users to manually create a parallelization plan or automatically generate one from a limited space of model parallelism configurations. They do not suffice to scale out complex DL models on distributed compute devices. Alpa distributes the training of large DL models by viewing parallelisms as two hierarchical levels: inter-operator and intra-operator parallelisms. Based on it, Alpa constructs a new hierarchical space for massive model-parallel execution plans. Alpa designs a number of compilation passes to automatically derive efficient parallel execution plans at each parallelism level. Alpa implements an efficient runtime to orchestrate the two-level parallel execution on distributed compute devices. Our evaluation shows Alpa generates parallelization plans that match or outperform hand-tuned model-parallel training systems even on models they are designed for. Unlike specialized systems, Alpa also generalizes to models with heterogeneous architectures and models without manually-designed plans. Alpa's source code is publicly available at https://github.com/alpa-projects/alpa
2307.02591
Sunjae Kwon
Sunjae Kwon, Xun Wang, Weisong Liu, Emily Druhl, Minhee L. Sung, Joel I. Reisman, Wenjun Li, Robert D. Kerns, William Becker, Hong Yu
ODD: A Benchmark Dataset for the Natural Language Processing based Opioid Related Aberrant Behavior Detection
To be appeared at NAACL 2024
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Opioid related aberrant behaviors (ORABs) present novel risk factors for opioid overdose. This paper introduces a novel biomedical natural language processing benchmark dataset named ODD, for ORAB Detection Dataset. ODD is an expert-annotated dataset designed to identify ORABs from patients' EHR notes and classify them into nine categories: 1) Confirmed Aberrant Behavior, 2) Suggested Aberrant Behavior, 3) Opioids, 4) Indication, 5) Diagnosed opioid dependency, 6) Benzodiazepines, 7) Medication Changes, 8) Central Nervous System-related, and 9) Social Determinants of Health. We explored two state-of-the-art natural language processing approaches (fine-tuning and prompt-tuning) to identify ORABs. Experimental results show that the prompt-tuning models outperformed the fine-tuning models in most categories, and the gains were especially high among uncommon categories (Suggested Aberrant Behavior, Confirmed Aberrant Behavior, Diagnosed opioid dependency, and Medication Changes). Although the best model achieved the highest macro-averaged area under the precision-recall curve of 88.17%, the uncommon classes still leave large room for performance improvement. ODD is publicly available.
[ { "created": "Wed, 5 Jul 2023 18:41:29 GMT", "version": "v1" }, { "created": "Mon, 24 Jul 2023 00:47:23 GMT", "version": "v2" }, { "created": "Thu, 15 Feb 2024 17:40:03 GMT", "version": "v3" }, { "created": "Fri, 22 Mar 2024 20:01:04 GMT", "version": "v4" } ]
2024-03-26
[ [ "Kwon", "Sunjae", "" ], [ "Wang", "Xun", "" ], [ "Liu", "Weisong", "" ], [ "Druhl", "Emily", "" ], [ "Sung", "Minhee L.", "" ], [ "Reisman", "Joel I.", "" ], [ "Li", "Wenjun", "" ], [ "Kerns", "Robert D.", "" ], [ "Becker", "William", "" ], [ "Yu", "Hong", "" ] ]
Opioid related aberrant behaviors (ORABs) present novel risk factors for opioid overdose. This paper introduces a novel biomedical natural language processing benchmark dataset named ODD, for ORAB Detection Dataset. ODD is an expert-annotated dataset designed to identify ORABs from patients' EHR notes and classify them into nine categories: 1) Confirmed Aberrant Behavior, 2) Suggested Aberrant Behavior, 3) Opioids, 4) Indication, 5) Diagnosed opioid dependency, 6) Benzodiazepines, 7) Medication Changes, 8) Central Nervous System-related, and 9) Social Determinants of Health. We explored two state-of-the-art natural language processing approaches (fine-tuning and prompt-tuning) to identify ORABs. Experimental results show that the prompt-tuning models outperformed the fine-tuning models in most categories, and the gains were especially high among uncommon categories (Suggested Aberrant Behavior, Confirmed Aberrant Behavior, Diagnosed opioid dependency, and Medication Changes). Although the best model achieved the highest macro-averaged area under the precision-recall curve of 88.17%, the uncommon classes still leave large room for performance improvement. ODD is publicly available.
1306.1889
Pradeep Singla
Aakash Gupta, Pradeep Singla, Jitendra Gupta, Nitin Maheshwari
An Improved Structure Of Reversible Adder And Subtractor
null
International Journal of Electronics and Computer Science Engineering, Vol 2, No. 2, pp712-718, June 2013
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Every day, a new technology that is faster, smaller, and more complex than its predecessor is developed. The growing number of transistors packed onto a chip in a conventional system increases power consumption, which is why reversible logic has drawn the attention of researchers for its low heat dissipation. Reversible logic can be applied in areas such as quantum computing, optical computing, quantum-dot cellular automata, low-power VLSI circuits, and DNA computing. This paper presents reversible combinational circuits for an adder, a subtractor, and a parity-preserving subtractor. The circuits suggested in this paper are designed using Feynman, Double Feynman, and MUX gates and improve on existing designs in the literature in terms of quantum cost, garbage outputs, and total logical calculations.
[ { "created": "Sat, 8 Jun 2013 07:21:22 GMT", "version": "v1" } ]
2013-06-11
[ [ "Gupta", "Aakash", "" ], [ "Singla", "Pradeep", "" ], [ "Gupta", "Jitendra", "" ], [ "Maheshwari", "Nitin", "" ] ]
Every day, a new technology that is faster, smaller, and more complex than its predecessor is developed. The growing number of transistors packed onto a chip in a conventional system increases power consumption, which is why reversible logic has drawn the attention of researchers for its low heat dissipation. Reversible logic can be applied in areas such as quantum computing, optical computing, quantum-dot cellular automata, low-power VLSI circuits, and DNA computing. This paper presents reversible combinational circuits for an adder, a subtractor, and a parity-preserving subtractor. The circuits suggested in this paper are designed using Feynman, Double Feynman, and MUX gates and improve on existing designs in the literature in terms of quantum cost, garbage outputs, and total logical calculations.
2312.15935
Romain Abraham
Romain Abraham (IDP), Jean-Fran\c{c}ois Delmas (CERMICS), Julien Weibel (IDP, CERMICS)
Probability-graphons: Limits of large dense weighted graphs
null
null
null
null
cs.DM math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce probability-graphons, which are probability kernels that generalize graphons to the case of weighted graphs. Probability-graphons appear as the limit objects for studying sequences of large weighted graphs whose distributions of sampled subgraphs converge. The edge-weights are taken from a general Polish space, which also covers the case of decorated graphs. Here, graphs can be either directed or undirected. Starting from a distance $d_m$ inducing the weak topology on measures, we define a cut distance on probability-graphons, making it a Polish space, and study the properties of this cut distance. In particular, we exhibit a tightness criterion for probability-graphons related to relative compactness in the cut distance. We also prove that under some conditions on the distance $d_m$, which are satisfied for some well-known distances like the Prohorov distance and the Fortet-Mourier and Kantorovitch-Rubinstein norms, the topology induced by the cut distance on the space of probability-graphons is independent of the choice of $d_m$. Finally, we prove that this topology coincides with the topology induced by the convergence in distribution of the sampled subgraphs.
[ { "created": "Tue, 26 Dec 2023 07:59:59 GMT", "version": "v1" } ]
2023-12-27
[ [ "Abraham", "Romain", "", "IDP" ], [ "Delmas", "Jean-François", "", "CERMICS" ], [ "Weibel", "Julien", "", "IDP, CERMICS" ] ]
We introduce probability-graphons, which are probability kernels that generalize graphons to the case of weighted graphs. Probability-graphons appear as the limit objects for studying sequences of large weighted graphs whose distributions of sampled subgraphs converge. The edge-weights are taken from a general Polish space, which also covers the case of decorated graphs. Here, graphs can be either directed or undirected. Starting from a distance $d_m$ inducing the weak topology on measures, we define a cut distance on probability-graphons, making it a Polish space, and study the properties of this cut distance. In particular, we exhibit a tightness criterion for probability-graphons related to relative compactness in the cut distance. We also prove that under some conditions on the distance $d_m$, which are satisfied for some well-known distances like the Prohorov distance and the Fortet-Mourier and Kantorovitch-Rubinstein norms, the topology induced by the cut distance on the space of probability-graphons is independent of the choice of $d_m$. Finally, we prove that this topology coincides with the topology induced by the convergence in distribution of the sampled subgraphs.
1707.06381
Suhwan Lim
Suhwan Lim, Jong-Ho Bae, Jai-Ho Eum, Sungtae Lee, Chul-Heung Kim, Dongseok Kwon, Byung-Gook Park, Jong-Ho Lee
Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices
null
Neural Comput. Appl. (2018)
10.1007/s00521-018-3659-y
null
cs.NE cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward propagation, backward propagation, and weight updates in hardware, is helpful for implementing power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron network, we evaluate the learning performance for various conductance responses of electronic synapse devices and weight-updating methods. It is shown that the learning accuracy is comparable to that obtained with a software-based BP algorithm when the electronic synapse device has a linear conductance response with a high dynamic range. Furthermore, the proposed unidirectional weight-updating method is suitable for electronic synapse devices that have nonlinear and finite conductance responses. Because this weight-updating method can compensate for the drawback of asymmetric weight updates, we obtain better accuracy than with other methods. This adaptive learning rule, which can be applied to a full hardware implementation, can also compensate for the degradation of learning accuracy due to the probable device-to-device variation in an actual electronic synapse device.
[ { "created": "Thu, 20 Jul 2017 06:10:36 GMT", "version": "v1" }, { "created": "Sat, 19 Aug 2017 11:42:23 GMT", "version": "v2" } ]
2018-08-02
[ [ "Lim", "Suhwan", "" ], [ "Bae", "Jong-Ho", "" ], [ "Eum", "Jai-Ho", "" ], [ "Lee", "Sungtae", "" ], [ "Kim", "Chul-Heung", "" ], [ "Kwon", "Dongseok", "" ], [ "Park", "Byung-Gook", "" ], [ "Lee", "Jong-Ho", "" ] ]
In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward propagation, backward propagation, and weight updates in hardware, is helpful for implementing power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron network, we evaluate the learning performance for various conductance responses of electronic synapse devices and weight-updating methods. It is shown that the learning accuracy is comparable to that obtained with a software-based BP algorithm when the electronic synapse device has a linear conductance response with a high dynamic range. Furthermore, the proposed unidirectional weight-updating method is suitable for electronic synapse devices that have nonlinear and finite conductance responses. Because this weight-updating method can compensate for the drawback of asymmetric weight updates, we obtain better accuracy than with other methods. This adaptive learning rule, which can be applied to a full hardware implementation, can also compensate for the degradation of learning accuracy due to the probable device-to-device variation in an actual electronic synapse device.
2104.08253
Luyu Gao
Luyu Gao, Jamie Callan
Condenser: a Pre-training Architecture for Dense Retrieval
EMNLP 2021
null
null
null
cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained Transformer language models (LMs) have become the go-to text representation encoders. Prior research fine-tunes deep LMs to encode text sequences such as sentences and passages into single dense vector representations for efficient text comparison and retrieval. However, dense encoders require a lot of data and sophisticated techniques to train effectively, and they suffer in low-data situations. This paper finds that a key reason is that standard LMs' internal attention structure is not ready-to-use for dense encoders, which need to aggregate text information into the dense representation. We propose to pre-train towards a dense encoder with a novel Transformer architecture, Condenser, in which LM prediction CONditions on DENSE Representation. Our experiments show Condenser improves over standard LMs by large margins on various text retrieval and similarity tasks.
[ { "created": "Fri, 16 Apr 2021 17:36:44 GMT", "version": "v1" }, { "created": "Mon, 20 Sep 2021 18:07:10 GMT", "version": "v2" } ]
2021-09-22
[ [ "Gao", "Luyu", "" ], [ "Callan", "Jamie", "" ] ]
Pre-trained Transformer language models (LMs) have become the go-to text representation encoders. Prior research fine-tunes deep LMs to encode text sequences such as sentences and passages into single dense vector representations for efficient text comparison and retrieval. However, dense encoders require a lot of data and sophisticated techniques to train effectively, and they suffer in low-data situations. This paper finds that a key reason is that standard LMs' internal attention structure is not ready-to-use for dense encoders, which need to aggregate text information into the dense representation. We propose to pre-train towards a dense encoder with a novel Transformer architecture, Condenser, in which LM prediction CONditions on DENSE Representation. Our experiments show Condenser improves over standard LMs by large margins on various text retrieval and similarity tasks.
1711.03543
Anush Sankaran
Akshay Sethi, Anush Sankaran, Naveen Panwar, Shreya Khare, Senthil Mani
DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers
AAAI2018
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With an abundance of research papers in deep learning, reproducing or adopting existing work becomes a challenge, due to the lack of open-source implementations provided by the authors. Further, re-implementing research papers in a different library is a daunting task. To address these challenges, we propose a novel extensible approach, DLPaper2Code, to extract and understand the deep learning design flow diagrams and tables available in a research paper and convert them to an abstract computational graph. The extracted computational graph is then converted into execution-ready source code in both Keras and Caffe, in real time. An arXiv-like website is created where the automatically generated designs are made publicly available for 5,000 research papers. The generated designs can be rated and edited using an intuitive drag-and-drop UI framework in a crowdsourced manner. To evaluate our approach, we create a simulated dataset with over 216,000 valid design visualizations using a manually defined grammar. Experiments on the simulated dataset show that the proposed framework provides more than $93\%$ accuracy in flow diagram content extraction.
[ { "created": "Thu, 9 Nov 2017 10:00:19 GMT", "version": "v1" } ]
2017-11-13
[ [ "Sethi", "Akshay", "" ], [ "Sankaran", "Anush", "" ], [ "Panwar", "Naveen", "" ], [ "Khare", "Shreya", "" ], [ "Mani", "Senthil", "" ] ]
With an abundance of research papers in deep learning, reproducing or adopting existing work becomes a challenge, due to the lack of open-source implementations provided by the authors. Further, re-implementing research papers in a different library is a daunting task. To address these challenges, we propose a novel extensible approach, DLPaper2Code, to extract and understand the deep learning design flow diagrams and tables available in a research paper and convert them to an abstract computational graph. The extracted computational graph is then converted into execution-ready source code in both Keras and Caffe, in real time. An arXiv-like website is created where the automatically generated designs are made publicly available for 5,000 research papers. The generated designs can be rated and edited using an intuitive drag-and-drop UI framework in a crowdsourced manner. To evaluate our approach, we create a simulated dataset with over 216,000 valid design visualizations using a manually defined grammar. Experiments on the simulated dataset show that the proposed framework provides more than $93\%$ accuracy in flow diagram content extraction.
2303.12739
Raoul Sch\"onhof
Jannes Elstner and Raoul G. C. Sch\"onhof and Steffen Tauber and Marco F Huber
Optimizing CAD Models with Latent Space Manipulation
null
null
null
null
cs.CV cs.AI cs.CE cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
When it comes to the optimization of CAD models in the automation domain, neural networks currently play only a minor role. Optimizing abstract features such as automation capability is challenging, since they can be very difficult to simulate, are too complex for rule-based systems, and also have little to no data available for machine-learning methods. On the other hand, image manipulation methods that can manipulate abstract features in images, such as StyleCLIP, have seen much success. They rely on the latent space of pretrained generative adversarial networks, and could therefore also make use of the vast amount of unlabeled CAD data. In this paper, we show that such an approach is also suitable for optimizing abstract automation-related features of CAD parts. We achieved this by extending StyleCLIP to work with CAD models in the form of voxel models, which includes using a 3D StyleGAN and a custom classifier. Finally, we demonstrate the ability of our system to optimize automation-related features by optimizing the grabability of various CAD models. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/) Peer review under the responsibility of the scientific committee of the 33rd CIRP Design Conference.
[ { "created": "Thu, 9 Mar 2023 08:25:09 GMT", "version": "v1" } ]
2023-03-23
[ [ "Elstner", "Jannes", "" ], [ "Schönhof", "Raoul G. C.", "" ], [ "Tauber", "Steffen", "" ], [ "Huber", "Marco F", "" ] ]
When it comes to the optimization of CAD models in the automation domain, neural networks currently play only a minor role. Optimizing abstract features such as automation capability is challenging, since they can be very difficult to simulate, are too complex for rule-based systems, and also have little to no data available for machine-learning methods. On the other hand, image manipulation methods that can manipulate abstract features in images, such as StyleCLIP, have seen much success. They rely on the latent space of pretrained generative adversarial networks, and could therefore also make use of the vast amount of unlabeled CAD data. In this paper, we show that such an approach is also suitable for optimizing abstract automation-related features of CAD parts. We achieved this by extending StyleCLIP to work with CAD models in the form of voxel models, which includes using a 3D StyleGAN and a custom classifier. Finally, we demonstrate the ability of our system to optimize automation-related features by optimizing the grabability of various CAD models. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/) Peer review under the responsibility of the scientific committee of the 33rd CIRP Design Conference.
1707.01204
Santosh Vempala
Manuel Blum and Santosh Vempala
The Complexity of Human Computation: A Concrete Model with an Application to Passwords
null
null
null
null
cs.HC cs.CC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
What can humans compute in their heads? We are thinking of a variety of Crypto Protocols, games like Sudoku, Crossword Puzzles, Speed Chess, and so on. The intent of this paper is to apply the ideas and methods of theoretical computer science to better understand what humans can compute in their heads. For example, can a person compute a function in their head so that an eavesdropper with a powerful computer --- who sees the responses to random input --- still cannot infer responses to new inputs? To address such questions, we propose a rigorous model of human computation and associated measures of complexity. We apply the model and measures first and foremost to the problem of (1) humanly computable password generation, and then consider related problems of (2) humanly computable "one-way functions" and (3) humanly computable "pseudorandom generators". The theory of Human Computability developed here plays by different rules than standard computability, and this takes some getting used to. For reasons to be made clear, the polynomial versus exponential time divide of modern computability theory is irrelevant to human computation. In human computability, the step-counts for both humans and computers must be more concrete. Specifically, we restrict the adversary to at most 10^24 (Avogadro number of) steps. An alternate view of this work is that it deals with the analysis of algorithms and counting steps for the case that inputs are small as opposed to the usual case of inputs large-in-the-limit.
[ { "created": "Wed, 5 Jul 2017 03:25:52 GMT", "version": "v1" } ]
2017-07-06
[ [ "Blum", "Manuel", "" ], [ "Vempala", "Santosh", "" ] ]
What can humans compute in their heads? We are thinking of a variety of Crypto Protocols, games like Sudoku, Crossword Puzzles, Speed Chess, and so on. The intent of this paper is to apply the ideas and methods of theoretical computer science to better understand what humans can compute in their heads. For example, can a person compute a function in their head so that an eavesdropper with a powerful computer --- who sees the responses to random input --- still cannot infer responses to new inputs? To address such questions, we propose a rigorous model of human computation and associated measures of complexity. We apply the model and measures first and foremost to the problem of (1) humanly computable password generation, and then consider related problems of (2) humanly computable "one-way functions" and (3) humanly computable "pseudorandom generators". The theory of Human Computability developed here plays by different rules than standard computability, and this takes some getting used to. For reasons to be made clear, the polynomial versus exponential time divide of modern computability theory is irrelevant to human computation. In human computability, the step-counts for both humans and computers must be more concrete. Specifically, we restrict the adversary to at most 10^24 (Avogadro number of) steps. An alternate view of this work is that it deals with the analysis of algorithms and counting steps for the case that inputs are small as opposed to the usual case of inputs large-in-the-limit.
2204.02601
Yanyang Li
Yanyang Li, Fuli Luo, Runxin Xu, Songfang Huang, Fei Huang, Liwei Wang
Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency
ACL 2022 Main Conference, Camera-ready version
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structured pruning has been extensively studied on monolingual pre-trained language models and is yet to be fully evaluated on their multilingual counterparts. This work investigates three aspects of structured pruning on multilingual pre-trained language models: settings, algorithms, and efficiency. Experiments on nine downstream tasks show several counter-intuitive phenomena: for settings, individually pruning for each language does not induce a better result; for algorithms, the simplest method performs the best; for efficiency, a fast model does not imply that it is also small. To facilitate the comparison on all sparsity levels, we present Dynamic Sparsification, a simple approach that allows training the model once and adapting to different model sizes at inference. We hope this work fills the gap in the study of structured pruning on multilingual pre-trained models and sheds light on future research.
[ { "created": "Wed, 6 Apr 2022 06:29:52 GMT", "version": "v1" } ]
2022-04-07
[ [ "Li", "Yanyang", "" ], [ "Luo", "Fuli", "" ], [ "Xu", "Runxin", "" ], [ "Huang", "Songfang", "" ], [ "Huang", "Fei", "" ], [ "Wang", "Liwei", "" ] ]
Structured pruning has been extensively studied on monolingual pre-trained language models and is yet to be fully evaluated on their multilingual counterparts. This work investigates three aspects of structured pruning on multilingual pre-trained language models: settings, algorithms, and efficiency. Experiments on nine downstream tasks show several counter-intuitive phenomena: for settings, individually pruning for each language does not induce a better result; for algorithms, the simplest method performs the best; for efficiency, a fast model does not imply that it is also small. To facilitate the comparison on all sparsity levels, we present Dynamic Sparsification, a simple approach that allows training the model once and adapting to different model sizes at inference. We hope this work fills the gap in the study of structured pruning on multilingual pre-trained models and sheds light on future research.
2009.08704
Aythami Morales
Alejandro Pe\~na and Julian Fierrez and Agata Lapedriza and Aythami Morales
Learning Emotional-Blinded Face Representations
IAPR Intl. Conf. on Pattern Recognition, 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose two face representations that are blind to the facial expressions associated with emotional responses. This work is in part motivated by new international regulations for personal data protection, which require data controllers to protect any kind of sensitive information involved in automatic processes. Advances in Affective Computing have contributed to improving human-machine interfaces but, at the same time, the capacity to monitor emotional responses triggers potential risks for humans, both in terms of fairness and privacy. We propose two different methods to learn these expression-blinded facial features. We show that it is possible to eliminate information related to emotion recognition tasks, while the performance of subject verification, gender recognition, and ethnicity classification is only slightly affected. We also present an application to train fairer classifiers in a case study of attractiveness classification with respect to a protected facial expression attribute. The results demonstrate that it is possible to reduce emotional information in the face representation while retaining competitive performance in other face-based artificial intelligence tasks.
[ { "created": "Fri, 18 Sep 2020 09:24:10 GMT", "version": "v1" } ]
2020-09-21
[ [ "Peña", "Alejandro", "" ], [ "Fierrez", "Julian", "" ], [ "Lapedriza", "Agata", "" ], [ "Morales", "Aythami", "" ] ]
We propose two face representations that are blind to the facial expressions associated with emotional responses. This work is in part motivated by new international regulations for personal data protection, which require data controllers to protect any kind of sensitive information involved in automatic processes. Advances in Affective Computing have contributed to improving human-machine interfaces but, at the same time, the capacity to monitor emotional responses triggers potential risks for humans, both in terms of fairness and privacy. We propose two different methods to learn these expression-blinded facial features. We show that it is possible to eliminate information related to emotion recognition tasks, while the performance of subject verification, gender recognition, and ethnicity classification is only slightly affected. We also present an application to train fairer classifiers in a case study of attractiveness classification with respect to a protected facial expression attribute. The results demonstrate that it is possible to reduce emotional information in the face representation while retaining competitive performance in other face-based artificial intelligence tasks.
2009.04177
Ke Zhang
Ke Zhang, Yukun Su, Xiwang Guo, Liang Qi, and Zhenbing Zhao
MU-GAN: Facial Attribute Editing based on Multi-attention Mechanism
12 pages, 10 figures
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facial attribute editing has two main objectives: 1) translating an image from a source domain to a target one, and 2) changing only the facial regions related to a target attribute while preserving the attribute-excluding details. In this work, we propose a Multi-attention U-Net-based Generative Adversarial Network (MU-GAN). First, we replace a classic convolutional encoder-decoder with a symmetric U-Net-like structure in the generator, and then apply an additive attention mechanism to build attention-based U-Net connections for adaptively transferring encoder representations to complement the decoder with attribute-excluding detail and enhance attribute editing ability. Second, a self-attention mechanism is incorporated into the convolutional layers for modeling long-range and multi-level dependencies across image regions. Experimental results indicate that our method is capable of balancing attribute editing ability and detail preservation ability, and can decouple the correlation among attributes. It outperforms the state-of-the-art methods in terms of attribute manipulation accuracy and image quality.
[ { "created": "Wed, 9 Sep 2020 09:25:04 GMT", "version": "v1" } ]
2020-09-10
[ [ "Zhang", "Ke", "" ], [ "Su", "Yukun", "" ], [ "Guo", "Xiwang", "" ], [ "Qi", "Liang", "" ], [ "Zhao", "Zhenbing", "" ] ]
Facial attribute editing has two main objectives: 1) translating an image from a source domain to a target one, and 2) changing only the facial regions related to a target attribute while preserving the attribute-excluding details. In this work, we propose a Multi-attention U-Net-based Generative Adversarial Network (MU-GAN). First, we replace a classic convolutional encoder-decoder with a symmetric U-Net-like structure in the generator, and then apply an additive attention mechanism to build attention-based U-Net connections for adaptively transferring encoder representations to complement the decoder with attribute-excluding detail and enhance attribute editing ability. Second, a self-attention mechanism is incorporated into the convolutional layers for modeling long-range and multi-level dependencies across image regions. Experimental results indicate that our method is capable of balancing attribute editing ability and detail preservation ability, and can decouple the correlation among attributes. It outperforms the state-of-the-art methods in terms of attribute manipulation accuracy and image quality.
1706.06936
Kushagra Singhal
Kushagra Singhal, Daniel Cullina, Negar Kiyavash
Significance of Side Information in the Graph Matching Problem
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Percolation-based graph matching algorithms rely on the availability of seed vertex pairs as side information to efficiently match users across networks. Although such algorithms work well in practice, other types of side information are available which are potentially useful to an attacker. In this paper, we consider the problem of matching two correlated graphs when an attacker has access to side information, either in the form of community labels or an imperfect initial matching. In the former case, we propose a naive graph matching algorithm by introducing community degree vectors, which harness the information from community labels in an efficient manner. Furthermore, we analyze a variant of the basic percolation algorithm proposed in the literature for graphs with community structure. In the latter case, we propose a novel percolation algorithm with two thresholds which uses an imperfect matching as input to match correlated graphs. We evaluate the proposed algorithms on synthetic as well as real-world datasets through various experiments. The experimental results demonstrate the importance of communities as side information, especially when the number of seeds is small and the networks are weakly correlated.
[ { "created": "Wed, 21 Jun 2017 14:42:19 GMT", "version": "v1" } ]
2017-06-22
[ [ "Singhal", "Kushagra", "" ], [ "Cullina", "Daniel", "" ], [ "Kiyavash", "Negar", "" ] ]
Percolation-based graph matching algorithms rely on the availability of seed vertex pairs as side information to efficiently match users across networks. Although such algorithms work well in practice, other types of side information are available which are potentially useful to an attacker. In this paper, we consider the problem of matching two correlated graphs when an attacker has access to side information, either in the form of community labels or an imperfect initial matching. In the former case, we propose a naive graph matching algorithm by introducing community degree vectors, which harness the information from community labels in an efficient manner. Furthermore, we analyze a variant of the basic percolation algorithm proposed in the literature for graphs with community structure. In the latter case, we propose a novel percolation algorithm with two thresholds which uses an imperfect matching as input to match correlated graphs. We evaluate the proposed algorithms on synthetic as well as real-world datasets through various experiments. The experimental results demonstrate the importance of communities as side information, especially when the number of seeds is small and the networks are weakly correlated.
1809.08311
Carl Pearson
Carl Pearson and Abdul Dakkak and Cheng Li and Sarah Hashash and Jinjun Xiong and Wen-mei Hwu
SCOPE: C3SR Systems Characterization and Benchmarking Framework
8 pages, draft
null
null
null
cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report presents the design of the Scope infrastructure for extensible and portable benchmarking. Improvements in high-performance computing systems rely on coordination across different levels of system abstraction. Developing and defining accurate performance measurements is necessary at all levels of the system hierarchy, and should be as accessible as possible to developers with different backgrounds. The Scope project aims to lower the barrier to entry for developing performance benchmarks by providing a software architecture that allows benchmarks to be developed independently, by providing useful C/C++ abstractions and utilities, and by providing a Python package for generating publication-quality plots of resulting measurements.
[ { "created": "Tue, 18 Sep 2018 20:25:44 GMT", "version": "v1" } ]
2018-09-25
[ [ "Pearson", "Carl", "" ], [ "Dakkak", "Abdul", "" ], [ "Li", "Cheng", "" ], [ "Hashash", "Sarah", "" ], [ "Xiong", "Jinjun", "" ], [ "Hwu", "Wen-mei", "" ] ]
This report presents the design of the Scope infrastructure for extensible and portable benchmarking. Improvements in high-performance computing systems rely on coordination across different levels of system abstraction. Developing and defining accurate performance measurements is necessary at all levels of the system hierarchy, and should be as accessible as possible to developers with different backgrounds. The Scope project aims to lower the barrier to entry for developing performance benchmarks by providing a software architecture that allows benchmarks to be developed independently, by providing useful C/C++ abstractions and utilities, and by providing a Python package for generating publication-quality plots of resulting measurements.
cs/0009001
Andrei N. Soklakov
Andrei N. Soklakov (Royal Holloway, University of London)
Complexity analysis for algorithmically simple strings
10 pages
null
null
null
cs.LG
null
Given a reference computer, Kolmogorov complexity is a well-defined function on all binary strings. In the standard approach, however, only the asymptotic properties of such functions are considered because they do not depend on the reference computer. We argue that this approach can be more useful if it is refined to include an important practical case of simple binary strings. Kolmogorov complexity calculus may be developed for this case if we restrict the class of available reference computers. The interesting problem is to define a class of computers which is restricted in a {\it natural} way, modeling the real-life situation where only a limited class of computers is physically available to us. We give an example of what such a natural restriction might look like mathematically, and show that under such restrictions some error terms, even logarithmic in complexity, can disappear from the standard complexity calculus. Keywords: Kolmogorov complexity; Algorithmic information theory.
[ { "created": "Tue, 5 Sep 2000 18:54:58 GMT", "version": "v1" }, { "created": "Mon, 18 Jun 2001 03:22:43 GMT", "version": "v2" }, { "created": "Tue, 26 Feb 2002 01:51:09 GMT", "version": "v3" } ]
2007-05-23
[ [ "Soklakov", "Andrei N.", "", "Royal Holloway, University of London" ] ]
Given a reference computer, Kolmogorov complexity is a well-defined function on all binary strings. In the standard approach, however, only the asymptotic properties of such functions are considered because they do not depend on the reference computer. We argue that this approach can be more useful if it is refined to include an important practical case of simple binary strings. Kolmogorov complexity calculus may be developed for this case if we restrict the class of available reference computers. The interesting problem is to define a class of computers which is restricted in a {\it natural} way, modeling the real-life situation where only a limited class of computers is physically available to us. We give an example of what such a natural restriction might look like mathematically, and show that under such restrictions some error terms, even logarithmic in complexity, can disappear from the standard complexity calculus. Keywords: Kolmogorov complexity; Algorithmic information theory.
2305.20009
Samuel Stanton
Nate Gruver, Samuel Stanton, Nathan C. Frey, Tim G. J. Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, and Andrew Gordon Wilson
Protein Design with Guided Discrete Diffusion
null
Advances in Neural Information Processing Systems 36, December 10-16, 2023
null
null
cs.LG q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling. The generative model samples plausible sequences while the discriminative model guides a search for sequences with high fitness. Given its broad success in conditional sampling, classifier-guided diffusion modeling is a promising foundation for protein design, leading many to develop guided diffusion models for structure with inverse folding to recover sequences. In this work, we propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models that follows gradients in the hidden states of the denoising network. NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods, including scarce data and challenging inverse design. Moreover, we use NOS to generalize LaMBO, a Bayesian optimization procedure for sequence design that facilitates multiple objectives and edit-based constraints. The resulting method, LaMBO-2, enables discrete diffusions and stronger performance with limited edits through a novel application of saliency maps. We apply LaMBO-2 to a real-world protein design task, optimizing antibodies for higher expression yield and binding affinity to several therapeutic targets under locality and developability constraints, attaining a 99% expression rate and 40% binding rate in exploratory in vitro experiments.
[ { "created": "Wed, 31 May 2023 16:31:24 GMT", "version": "v1" }, { "created": "Tue, 12 Dec 2023 05:09:38 GMT", "version": "v2" } ]
2023-12-13
[ [ "Gruver", "Nate", "" ], [ "Stanton", "Samuel", "" ], [ "Frey", "Nathan C.", "" ], [ "Rudner", "Tim G. J.", "" ], [ "Hotzel", "Isidro", "" ], [ "Lafrance-Vanasse", "Julien", "" ], [ "Rajpal", "Arvind", "" ], [ "Cho", "Kyunghyun", "" ], [ "Wilson", "Andrew Gordon", "" ] ]
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling. The generative model samples plausible sequences while the discriminative model guides a search for sequences with high fitness. Given its broad success in conditional sampling, classifier-guided diffusion modeling is a promising foundation for protein design, leading many to develop guided diffusion models for structure with inverse folding to recover sequences. In this work, we propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models that follows gradients in the hidden states of the denoising network. NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods, including scarce data and challenging inverse design. Moreover, we use NOS to generalize LaMBO, a Bayesian optimization procedure for sequence design that facilitates multiple objectives and edit-based constraints. The resulting method, LaMBO-2, enables discrete diffusions and stronger performance with limited edits through a novel application of saliency maps. We apply LaMBO-2 to a real-world protein design task, optimizing antibodies for higher expression yield and binding affinity to several therapeutic targets under locality and developability constraints, attaining a 99% expression rate and 40% binding rate in exploratory in vitro experiments.
2305.00208
Abdul Karim Gizzini
Abdul Karim Gizzini, Marwa Chafii
Deep Learning Based Channel Estimation in High Mobility Communications Using Bi-RNN Networks
Accepted for presentation at IEEE 2023 IEEE International Conference on Communications (ICC), 28 May - 01 June 2023, Rome, Italy
null
null
null
cs.IT cs.AI math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Doubly-selective channel estimation represents a key element in ensuring communication reliability in wireless systems. Due to the impact of multi-path propagation and Doppler interference in dynamic environments, doubly-selective channel estimation becomes challenging. Conventional channel estimation schemes encounter performance degradation in high mobility scenarios due to the usage of limited training pilots. Recently, deep learning (DL) has been utilized for doubly-selective channel estimation, where convolutional neural networks (CNNs) are employed in frame-by-frame (FBF) channel estimation. However, CNN-based estimators incur high complexity, making them impractical in real-world scenarios. To overcome this issue, we propose an optimized and robust bi-directional recurrent neural network (Bi-RNN) based channel estimator to accurately estimate the doubly-selective channel, especially in high mobility scenarios. The proposed estimator performs end-to-end interpolation using a gated recurrent unit (GRU). Extensive numerical experiments demonstrate that the developed Bi-GRU estimator significantly outperforms the recently proposed CNN-based estimators in different mobility scenarios, while substantially reducing the overall computational complexity.
[ { "created": "Sat, 29 Apr 2023 09:20:28 GMT", "version": "v1" } ]
2023-05-02
[ [ "Gizzini", "Abdul Karim", "" ], [ "Chafii", "Marwa", "" ] ]
Doubly-selective channel estimation represents a key element in ensuring communication reliability in wireless systems. Due to the impact of multi-path propagation and Doppler interference in dynamic environments, doubly-selective channel estimation becomes challenging. Conventional channel estimation schemes encounter performance degradation in high mobility scenarios due to the usage of limited training pilots. Recently, deep learning (DL) has been utilized for doubly-selective channel estimation, where convolutional neural networks (CNNs) are employed in frame-by-frame (FBF) channel estimation. However, CNN-based estimators incur high complexity, making them impractical in real-world scenarios. To overcome this issue, we propose an optimized and robust bi-directional recurrent neural network (Bi-RNN) based channel estimator to accurately estimate the doubly-selective channel, especially in high mobility scenarios. The proposed estimator performs end-to-end interpolation using a gated recurrent unit (GRU). Extensive numerical experiments demonstrate that the developed Bi-GRU estimator significantly outperforms the recently proposed CNN-based estimators in different mobility scenarios, while substantially reducing the overall computational complexity.
2405.11574
Yash Sanjay Bhalgat
Manan Shah, Yash Bhalgat
Reproducibility Study of CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification
Reproducibility study
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
This report is a reproducibility study of the paper "CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification" (Abdelfattah et al., ICCV 2023). Our report makes the following contributions: (1) We provide a reproducible, well-commented and open-sourced code implementation for the entire method specified in the original paper. (2) We try to verify the effectiveness of the novel aggregation strategy, which uses the CLIP model to initialize the pseudo labels for the subsequent unsupervised multi-label image classification task. (3) We try to verify the effectiveness of the gradient-alignment training method specified in the original paper, which is used to update the network parameters and pseudo labels. The code can be found at https://github.com/cs-mshah/CDUL
[ { "created": "Sun, 19 May 2024 14:48:19 GMT", "version": "v1" } ]
2024-05-21
[ [ "Shah", "Manan", "" ], [ "Bhalgat", "Yash", "" ] ]
This report is a reproducibility study of the paper "CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification" (Abdelfattah et al., ICCV 2023). Our report makes the following contributions: (1) We provide a reproducible, well-commented and open-sourced code implementation for the entire method specified in the original paper. (2) We try to verify the effectiveness of the novel aggregation strategy, which uses the CLIP model to initialize the pseudo labels for the subsequent unsupervised multi-label image classification task. (3) We try to verify the effectiveness of the gradient-alignment training method specified in the original paper, which is used to update the network parameters and pseudo labels. The code can be found at https://github.com/cs-mshah/CDUL
2405.20715
Diabul Haque
Diabul Haque
Transforming Japan Real Estate
null
null
null
null
cs.CE econ.EM q-fin.ST
http://creativecommons.org/licenses/by/4.0/
The Japanese real estate market, valued at over 35 trillion USD, offers significant investment opportunities. Accurate rent and price forecasting could provide a substantial competitive edge. This paper explores using alternative data variables to predict real estate performance in 1100 Japanese municipalities. A comprehensive house price index was created, covering all municipalities from 2005 to the present, using a dataset of over 5 million transactions. This core dataset was enriched with economic factors spanning decades, allowing for price trajectory predictions. The findings show that alternative data variables can indeed forecast real estate performance effectively. Investment signals based on these variables yielded notable returns with low volatility. For example, the net migration ratio delivered an annualized return of 4.6% with a Sharpe ratio of 1.5. Taxable income growth and new dwellings ratio also performed well, with annualized returns of 4.1% (Sharpe ratio of 1.3) and 3.3% (Sharpe ratio of 0.9), respectively. When combined with transformer models to predict risk-adjusted returns 4 years in advance, the model achieved an R-squared score of 0.28, explaining nearly 30% of the variation in future municipality prices. These results highlight the potential of alternative data variables in real estate investment. They underscore the need for further research to identify more predictive factors. Nonetheless, the evidence suggests that such data can provide valuable insights into real estate price drivers, enabling more informed investment decisions in the Japanese market.
[ { "created": "Fri, 31 May 2024 09:12:28 GMT", "version": "v1" } ]
2024-06-03
[ [ "Haque", "Diabul", "" ] ]
The Japanese real estate market, valued at over 35 trillion USD, offers significant investment opportunities. Accurate rent and price forecasting could provide a substantial competitive edge. This paper explores using alternative data variables to predict real estate performance in 1100 Japanese municipalities. A comprehensive house price index was created, covering all municipalities from 2005 to the present, using a dataset of over 5 million transactions. This core dataset was enriched with economic factors spanning decades, allowing for price trajectory predictions. The findings show that alternative data variables can indeed forecast real estate performance effectively. Investment signals based on these variables yielded notable returns with low volatility. For example, the net migration ratio delivered an annualized return of 4.6% with a Sharpe ratio of 1.5. Taxable income growth and new dwellings ratio also performed well, with annualized returns of 4.1% (Sharpe ratio of 1.3) and 3.3% (Sharpe ratio of 0.9), respectively. When combined with transformer models to predict risk-adjusted returns 4 years in advance, the model achieved an R-squared score of 0.28, explaining nearly 30% of the variation in future municipality prices. These results highlight the potential of alternative data variables in real estate investment. They underscore the need for further research to identify more predictive factors. Nonetheless, the evidence suggests that such data can provide valuable insights into real estate price drivers, enabling more informed investment decisions in the Japanese market.
2209.11345
Marcos V. Conde
Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte
Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration
European Conference on Computer Vision (ECCV 2022) Workshops
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Compression plays an important role in the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, transformer-based methods such as SwinIR show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2 to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and data hunger. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video".
[ { "created": "Thu, 22 Sep 2022 23:25:08 GMT", "version": "v1" } ]
2022-09-26
[ [ "Conde", "Marcos V.", "" ], [ "Choi", "Ui-Jin", "" ], [ "Burchi", "Maxime", "" ], [ "Timofte", "Radu", "" ] ]
Compression plays an important role in the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, transformer-based methods such as SwinIR show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2 to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and data hunger. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video".
1905.05408
Kyunghwan Son
Kyunghwan Son, Daewoo Kim, Wan Ju Kang, David Earl Hostallero, Yung Yi
QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning
18 pages; Accepted to ICML 2019
null
null
null
cs.LG cs.AI cs.MA stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore value-based solutions for multi-agent reinforcement learning (MARL) tasks in the centralized training with decentralized execution (CTDE) regime popularized recently. VDN and QMIX are representative examples that use the idea of factorizing the joint action-value function into individual ones for decentralized execution. However, VDN and QMIX address only a fraction of factorizable MARL tasks due to structural constraints in their factorization, such as additivity and monotonicity. In this paper, we propose a new factorization method for MARL, QTRAN, which is free from such structural constraints and takes a new approach of transforming the original joint action-value function into an easily factorizable one with the same optimal actions. QTRAN guarantees more general factorization than VDN or QMIX, thus covering a much wider class of MARL tasks than do previous methods. Our experiments on the tasks of multi-domain Gaussian-squeeze and modified predator-prey demonstrate QTRAN's superior performance, with especially large margins in games whose payoffs penalize non-cooperative behavior more aggressively.
[ { "created": "Tue, 14 May 2019 06:29:51 GMT", "version": "v1" } ]
2019-05-15
[ [ "Son", "Kyunghwan", "" ], [ "Kim", "Daewoo", "" ], [ "Kang", "Wan Ju", "" ], [ "Hostallero", "David Earl", "" ], [ "Yi", "Yung", "" ] ]
We explore value-based solutions for multi-agent reinforcement learning (MARL) tasks in the centralized training with decentralized execution (CTDE) regime popularized recently. VDN and QMIX are representative examples that use the idea of factorizing the joint action-value function into individual ones for decentralized execution. However, VDN and QMIX address only a fraction of factorizable MARL tasks due to structural constraints in their factorization, such as additivity and monotonicity. In this paper, we propose a new factorization method for MARL, QTRAN, which is free from such structural constraints and takes a new approach of transforming the original joint action-value function into an easily factorizable one with the same optimal actions. QTRAN guarantees more general factorization than VDN or QMIX, thus covering a much wider class of MARL tasks than do previous methods. Our experiments on the tasks of multi-domain Gaussian-squeeze and modified predator-prey demonstrate QTRAN's superior performance, with especially large margins in games whose payoffs penalize non-cooperative behavior more aggressively.
1803.04074
Susana Vidrio-Bar\'on
Susana B. Vidrio Bar\'on, Andrew W. Luse, Anthony M. Townsend
Development of a culturally-oriented website usability evaluation
15th Americas Conference on Information Systems 2009
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the uni-cultural studies of website usability have matured, the paucity of cross-cultural studies of usability becomes increasingly apparent. Moving toward these cross-cultural studies will require the development of a new tool to assess website usability in the context of cultural dimensions. This paper introduces the preliminary results from the first phase of this project and then presents the proposed method for the research in progress, which is specifically directed to the development and quantitative evaluation of a culture-sensitive measurement scale for website usability. The recognition of the need to develop this scale resulted from the identification of culture-related shortcomings of previous measurement tools that have been used widely within the Management of Information Systems (MIS) literature.
[ { "created": "Mon, 12 Mar 2018 00:39:08 GMT", "version": "v1" } ]
2018-03-13
[ [ "Barón", "Susana B. Vidrio", "" ], [ "Luse", "Andrew W.", "" ], [ "Townsend", "Anthony M.", "" ] ]
As the uni-cultural studies of website usability have matured, the paucity of cross-cultural studies of usability becomes increasingly apparent. Moving toward these cross-cultural studies will require the development of a new tool to assess website usability in the context of cultural dimensions. This paper introduces the preliminary results from the first phase of this project and then presents the proposed method for the research in progress, which is specifically directed to the development and quantitative evaluation of a culture-sensitive measurement scale for website usability. The recognition of the need to develop this scale resulted from the identification of culture-related shortcomings of previous measurement tools that have been used widely within the Management of Information Systems (MIS) literature.
1603.02776
Yang Liu
Yang Liu, Sujian Li, Xiaodong Zhang and Zhifang Sui
Implicit Discourse Relation Classification via Multi-Task Neural Networks
This is the pre-print version of a paper accepted by AAAI-16
null
null
null
cs.CL cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Without discourse connectives, classifying implicit discourse relations is a challenging task and a bottleneck for building a practical discourse parser. Previous research usually makes use of one kind of discourse framework such as PDTB or RST to improve the classification performance on discourse relations. Actually, under different discourse annotation frameworks, there exist multiple corpora which have internal connections. To exploit the combination of different discourse corpora, we design related discourse classification tasks specific to a corpus, and propose a novel Convolutional Neural Network embedded multi-task learning system to synthesize these tasks by learning both unique and shared representations for each task. The experimental results on the PDTB implicit discourse relation classification task demonstrate that our model achieves significant gains over baseline systems.
[ { "created": "Wed, 9 Mar 2016 03:13:37 GMT", "version": "v1" } ]
2016-03-10
[ [ "Liu", "Yang", "" ], [ "Li", "Sujian", "" ], [ "Zhang", "Xiaodong", "" ], [ "Sui", "Zhifang", "" ] ]
Without discourse connectives, classifying implicit discourse relations is a challenging task and a bottleneck for building a practical discourse parser. Previous research usually makes use of one kind of discourse framework such as PDTB or RST to improve the classification performance on discourse relations. Actually, under different discourse annotation frameworks, there exist multiple corpora which have internal connections. To exploit the combination of different discourse corpora, we design related discourse classification tasks specific to a corpus, and propose a novel Convolutional Neural Network embedded multi-task learning system to synthesize these tasks by learning both unique and shared representations for each task. The experimental results on the PDTB implicit discourse relation classification task demonstrate that our model achieves significant gains over baseline systems.
2105.10325
Jana Kierdorf
Jana Kierdorf, Immanuel Weber, Anna Kicherer, Laura Zabawa, Lukas Drees, Ribana Roscher
Behind the leaves -- Estimation of occluded grapevine berries with conditional generative adversarial networks
45 pages, 18 figures, 1 table
null
10.3389/frai.2022.830026
null
cs.CV cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
The need for accurate yield estimates for viticulture is becoming more important due to increasing competition in the wine market worldwide. One of the most promising methods to estimate the harvest is berry counting, as it can be approached non-destructively, and its process can be automated. In this article, we present a method that addresses the challenge of occluded berries with leaves to obtain a more accurate estimate of the number of berries that will enable a better estimate of the harvest. We use generative adversarial networks, a deep learning-based approach that generates a likely scenario behind the leaves exploiting learned patterns from images with non-occluded berries. Our experiments show that the estimate of the number of berries after applying our method is closer to the manually counted reference. In contrast to applying a factor to the berry count, our approach better adapts to local conditions by directly involving the appearance of the visible berries. Furthermore, we show that our approach can identify which areas in the image should be changed by adding new berries without explicitly requiring information about hidden areas.
[ { "created": "Fri, 21 May 2021 12:57:48 GMT", "version": "v1" } ]
2022-03-28
[ [ "Kierdorf", "Jana", "" ], [ "Weber", "Immanuel", "" ], [ "Kicherer", "Anna", "" ], [ "Zabawa", "Laura", "" ], [ "Drees", "Lukas", "" ], [ "Roscher", "Ribana", "" ] ]
The need for accurate yield estimates for viticulture is becoming more important due to increasing competition in the wine market worldwide. One of the most promising methods to estimate the harvest is berry counting, as it can be approached non-destructively, and its process can be automated. In this article, we present a method that addresses the challenge of occluded berries with leaves to obtain a more accurate estimate of the number of berries that will enable a better estimate of the harvest. We use generative adversarial networks, a deep learning-based approach that generates a likely scenario behind the leaves exploiting learned patterns from images with non-occluded berries. Our experiments show that the estimate of the number of berries after applying our method is closer to the manually counted reference. In contrast to applying a factor to the berry count, our approach better adapts to local conditions by directly involving the appearance of the visible berries. Furthermore, we show that our approach can identify which areas in the image should be changed by adding new berries without explicitly requiring information about hidden areas.
2102.04621
Xinchen Liu
Jinkai Zheng, Xinchen Liu, Chenggang Yan, Jiyong Zhang, Wu Liu, Xiaoping Zhang, Tao Mei
TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition
Accepted by ISCAS 2021. 5 pages, 2 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gait, i.e., the movement pattern of human limbs during locomotion, is a promising biometric for the identification of persons. Despite significant improvements in gait recognition with deep learning, existing studies still neglect a more practical but challenging scenario -- unsupervised cross-domain gait recognition, which aims to learn a model on a labeled dataset and then adapt it to an unlabeled dataset. Due to the domain shift and class gap, directly applying a model trained on one source dataset to other target datasets usually obtains very poor results. Therefore, this paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition. To learn effective prior knowledge for gait representation, we first adopt a backbone network pre-trained on the labeled source data in a supervised manner. Then we design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space. During training, a class consistency indicator is adopted to select confident neighborhoods of samples based on their entropy measurements. Moreover, we explore a high-entropy-first neighbor selection strategy, which can effectively transfer prior knowledge to the target domain. Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
[ { "created": "Tue, 9 Feb 2021 03:07:07 GMT", "version": "v1" } ]
2021-02-10
[ [ "Zheng", "Jinkai", "" ], [ "Liu", "Xinchen", "" ], [ "Yan", "Chenggang", "" ], [ "Zhang", "Jiyong", "" ], [ "Liu", "Wu", "" ], [ "Zhang", "Xiaoping", "" ], [ "Mei", "Tao", "" ] ]
Gait, i.e., the movement pattern of human limbs during locomotion, is a promising biometric for the identification of persons. Despite significant improvements in gait recognition with deep learning, existing studies still neglect a more practical but challenging scenario -- unsupervised cross-domain gait recognition, which aims to learn a model on a labeled dataset and then adapt it to an unlabeled dataset. Due to the domain shift and class gap, directly applying a model trained on one source dataset to other target datasets usually obtains very poor results. Therefore, this paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition. To learn effective prior knowledge for gait representation, we first adopt a backbone network pre-trained on the labeled source data in a supervised manner. Then we design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space. During training, a class consistency indicator is adopted to select confident neighborhoods of samples based on their entropy measurements. Moreover, we explore a high-entropy-first neighbor selection strategy, which can effectively transfer prior knowledge to the target domain. Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
1105.2934
Ludo Waltman
Ludo Waltman, Nees Jan van Eck and Anthony F.J. van Raan
Universality of citation distributions revisited
null
null
null
null
cs.DL physics.data-an physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radicchi, Fortunato, and Castellano [arXiv:0806.0974, PNAS 105(45), 17268] claim that, apart from a scaling factor, all fields of science are characterized by the same citation distribution. We present a large-scale validation study of this universality-of-citation-distributions claim. Our analysis shows that claiming citation distributions to be universal for all fields of science is not warranted. Although many fields indeed seem to have fairly similar citation distributions, there are quite a few exceptions as well. We also briefly discuss the consequences of our findings for the measurement of scientific impact using citation-based bibliometric indicators.
[ { "created": "Sun, 15 May 2011 09:03:04 GMT", "version": "v1" }, { "created": "Mon, 25 Jul 2011 21:28:10 GMT", "version": "v2" }, { "created": "Tue, 30 Aug 2011 17:19:16 GMT", "version": "v3" } ]
2011-08-31
[ [ "Waltman", "Ludo", "" ], [ "van Eck", "Nees Jan", "" ], [ "van Raan", "Anthony F. J.", "" ] ]
Radicchi, Fortunato, and Castellano [arXiv:0806.0974, PNAS 105(45), 17268] claim that, apart from a scaling factor, all fields of science are characterized by the same citation distribution. We present a large-scale validation study of this universality-of-citation-distributions claim. Our analysis shows that claiming citation distributions to be universal for all fields of science is not warranted. Although many fields indeed seem to have fairly similar citation distributions, there are quite a few exceptions as well. We also briefly discuss the consequences of our findings for the measurement of scientific impact using citation-based bibliometric indicators.
1906.07523
Emre Yilmaz
Emre Y{\i}lmaz, Samuel Cohen, Xianghu Yue, David van Leeuwen, Haizhou Li
Multi-Graph Decoding for Code-Switching ASR
Accepted for publication at Interspeech 2019
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the FAME! Project, a code-switching (CS) automatic speech recognition (ASR) system for Frisian-Dutch speech is developed that can accurately transcribe the local broadcaster's bilingual archives with CS speech. This archive contains recordings with monolingual Frisian and Dutch speech segments as well as Frisian-Dutch CS speech, hence the recognition performance on monolingual segments is also vital for accurate transcriptions. In this work, we propose a multi-graph decoding and rescoring strategy using bilingual and monolingual graphs together with a unified acoustic model for CS ASR. The proposed decoding scheme gives the freedom to design and employ alternative search spaces for each (monolingual or bilingual) recognition task and enables the effective use of monolingual resources of the high-resourced mixed language in low-resourced CS scenarios. In our scenario, Dutch is the high-resourced and Frisian is the low-resourced language. We therefore use additional monolingual Dutch text resources to improve the Dutch language model (LM) and compare the performance of single- and multi-graph CS ASR systems on Dutch segments using larger Dutch LMs. The ASR results show that the proposed approach outperforms baseline single-graph CS ASR systems, providing better performance on the monolingual Dutch segments without any accuracy loss on monolingual Frisian and code-mixed segments.
[ { "created": "Tue, 18 Jun 2019 12:24:32 GMT", "version": "v1" }, { "created": "Fri, 28 Jun 2019 07:07:08 GMT", "version": "v2" } ]
2019-07-01
[ [ "Yılmaz", "Emre", "" ], [ "Cohen", "Samuel", "" ], [ "Yue", "Xianghu", "" ], [ "van Leeuwen", "David", "" ], [ "Li", "Haizhou", "" ] ]
In the FAME! Project, a code-switching (CS) automatic speech recognition (ASR) system for Frisian-Dutch speech is developed that can accurately transcribe the local broadcaster's bilingual archives with CS speech. This archive contains recordings with monolingual Frisian and Dutch speech segments as well as Frisian-Dutch CS speech, hence the recognition performance on monolingual segments is also vital for accurate transcriptions. In this work, we propose a multi-graph decoding and rescoring strategy using bilingual and monolingual graphs together with a unified acoustic model for CS ASR. The proposed decoding scheme gives the freedom to design and employ alternative search spaces for each (monolingual or bilingual) recognition task and enables the effective use of monolingual resources of the high-resourced mixed language in low-resourced CS scenarios. In our scenario, Dutch is the high-resourced and Frisian is the low-resourced language. We therefore use additional monolingual Dutch text resources to improve the Dutch language model (LM) and compare the performance of single- and multi-graph CS ASR systems on Dutch segments using larger Dutch LMs. The ASR results show that the proposed approach outperforms baseline single-graph CS ASR systems, providing better performance on the monolingual Dutch segments without any accuracy loss on monolingual Frisian and code-mixed segments.
1708.09597
Cunxi Yu
Cunxi Yu, Mihir Choudhury, Andrew Sullivan, Maciej Ciesielski
Advanced Datapath Synthesis using Graph Isomorphism
6 pages, 8 figures. To appear in 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD'17)
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an advanced DAG-based algorithm for datapath synthesis that targets area minimization using logic-level resource sharing. The problem of identifying common specification logic is formulated as an unweighted graph isomorphism problem, in contrast to the weighted graph isomorphism formulation used with AIGs. In the context of gate-level datapath circuits, our algorithm solves the unweighted graph isomorphism problem in linear time. The experiments are conducted within an industrial synthesis flow that includes the complete high-level synthesis, logic synthesis, and placement-and-routing procedures. Experimental results show significant runtime improvements compared to the existing datapath synthesis algorithms.
[ { "created": "Thu, 31 Aug 2017 07:34:00 GMT", "version": "v1" } ]
2017-09-01
[ [ "Yu", "Cunxi", "" ], [ "Choudhury", "Mihir", "" ], [ "Sullivan", "Andrew", "" ], [ "Ciesielski", "Maciej", "" ] ]
This paper presents an advanced DAG-based algorithm for datapath synthesis that targets area minimization using logic-level resource sharing. The problem of identifying common specification logic is formulated as an unweighted graph isomorphism problem, in contrast to the weighted graph isomorphism formulation used with AIGs. In the context of gate-level datapath circuits, our algorithm solves the unweighted graph isomorphism problem in linear time. The experiments are conducted within an industrial synthesis flow that includes the complete high-level synthesis, logic synthesis, and placement-and-routing procedures. Experimental results show significant runtime improvements compared to the existing datapath synthesis algorithms.
1802.05568
Bin Guo
Yi Ouyang, Bin Guo, Xinjiang Lu, Qi Han, Tong Guo, Zhiwen Yu
CompetitiveBike: Competitive Prediction of Bike-Sharing Apps Using Heterogeneous Crowdsourced Data
null
null
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, bike-sharing systems have been deployed in many cities, providing an economical lifestyle. With the prevalence of bike-sharing systems, many companies have joined the market, leading to increasingly fierce competition. To be competitive, bike-sharing companies and app developers need to make strategic decisions for mobile app development. Therefore, it is important to predict and compare the popularity of different bike-sharing apps. However, existing works mostly focus on predicting the popularity of a single app, while the popularity contest among different apps has not yet been explored. In this paper, we aim to forecast the popularity contest between Mobike and Ofo, the two most popular bike-sharing apps in China. We develop CompetitiveBike, a system to predict the popularity contest among bike-sharing apps. Moreover, we conduct experiments on real-world datasets collected from 11 app stores and Sina Weibo, and the experiments demonstrate the effectiveness of our approach.
[ { "created": "Thu, 15 Feb 2018 14:36:09 GMT", "version": "v1" } ]
2018-02-16
[ [ "Ouyang", "Yi", "" ], [ "Guo", "Bin", "" ], [ "Lu", "Xinjiang", "" ], [ "Han", "Qi", "" ], [ "Guo", "Tong", "" ], [ "Yu", "Zhiwen", "" ] ]
In recent years, bike-sharing systems have been deployed in many cities, providing an economical lifestyle. With the prevalence of bike-sharing systems, many companies have joined the market, leading to increasingly fierce competition. To be competitive, bike-sharing companies and app developers need to make strategic decisions for mobile app development. Therefore, it is important to predict and compare the popularity of different bike-sharing apps. However, existing works mostly focus on predicting the popularity of a single app, while the popularity contest among different apps has not yet been explored. In this paper, we aim to forecast the popularity contest between Mobike and Ofo, the two most popular bike-sharing apps in China. We develop CompetitiveBike, a system to predict the popularity contest among bike-sharing apps. Moreover, we conduct experiments on real-world datasets collected from 11 app stores and Sina Weibo, and the experiments demonstrate the effectiveness of our approach.
2402.16194
Omama Hamad
Omama Hamad, Ali Hamdi, Khaled Shaban
ASEM: Enhancing Empathy in Chatbot through Attention-based Sentiment and Emotion Modeling
Accepted to the LREC-COLING 2024
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Effective feature representations play a critical role in enhancing the performance of text generation models that rely on deep neural networks. However, current approaches suffer from several drawbacks, such as the inability to capture the deep semantics of language and sensitivity to minor input variations, resulting in significant changes in the generated text. In this paper, we present a novel solution to these challenges by employing a mixture of experts, multiple encoders, to offer distinct perspectives on the emotional state of the user's utterance while simultaneously enhancing performance. We propose an end-to-end model architecture called ASEM that performs emotion analysis on top of sentiment analysis for open-domain chatbots, enabling the generation of empathetic responses that are fluent and relevant. In contrast to traditional attention mechanisms, the proposed model employs a specialized attention strategy that uniquely zeroes in on sentiment and emotion nuances within the user's utterance. This ensures the generation of context-rich representations tailored to the underlying emotional tone and sentiment intricacies of the text. Our approach outperforms existing methods for generating empathetic embeddings, providing empathetic and diverse responses. The performance of our proposed model significantly exceeds that of existing models, enhancing emotion detection accuracy by 6.2% and lexical diversity by 1.4%.
[ { "created": "Sun, 25 Feb 2024 20:36:51 GMT", "version": "v1" } ]
2024-02-27
[ [ "Hamad", "Omama", "" ], [ "Hamdi", "Ali", "" ], [ "Shaban", "Khaled", "" ] ]
Effective feature representations play a critical role in enhancing the performance of text generation models that rely on deep neural networks. However, current approaches suffer from several drawbacks, such as the inability to capture the deep semantics of language and sensitivity to minor input variations, resulting in significant changes in the generated text. In this paper, we present a novel solution to these challenges by employing a mixture of experts, multiple encoders, to offer distinct perspectives on the emotional state of the user's utterance while simultaneously enhancing performance. We propose an end-to-end model architecture called ASEM that performs emotion analysis on top of sentiment analysis for open-domain chatbots, enabling the generation of empathetic responses that are fluent and relevant. In contrast to traditional attention mechanisms, the proposed model employs a specialized attention strategy that uniquely zeroes in on sentiment and emotion nuances within the user's utterance. This ensures the generation of context-rich representations tailored to the underlying emotional tone and sentiment intricacies of the text. Our approach outperforms existing methods for generating empathetic embeddings, providing empathetic and diverse responses. The performance of our proposed model significantly exceeds that of existing models, enhancing emotion detection accuracy by 6.2% and lexical diversity by 1.4%.
1801.07555
Hongkai Wen
Yiran Shen, Fengyuan Yang, Bowen Du, Weitao Xu, Chengwen Luo, Hongkai Wen
Shake-n-Shack: Enabling Secure Data Exchange Between Smart Wearables via Handshakes
To appear in PerCom'18
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since ancient Greece, handshaking has been commonly practiced between two people as a friendly gesture to express trust and respect, or to form a mutual agreement. In this paper, we show that such physical contact can be used to bootstrap secure cyber contact between the smart devices worn by users. The key observation is that during handshaking, although belonging to two different users, the two hands involved in the shaking events are often rigidly connected, and therefore exhibit very similar motion patterns. We propose a novel Shake-n-Shack system, which harvests motion data during user handshaking from wrist-worn smart devices such as smartwatches or fitness bands, and exploits the matching motion patterns to generate symmetric keys on both parties. The generated keys can then be used to establish a secure communication channel for exchanging data between devices. This provides a much more natural and user-friendly alternative for many applications, e.g., exchanging/sharing contact details, friending on social networks, or even making payments, since it doesn't involve extra bespoke hardware, nor require the users to perform pre-defined gestures. We implement the proposed Shake-n-Shack system on off-the-shelf smartwatches, and extensive evaluation shows that it can reliably generate 128-bit symmetric keys after just around 1s of handshaking (with success rate >99%), and is resilient to real-time mimicking attacks: in our experiments the Equal Error Rate (EER) is only 1.6% on average. We also show that the proposed Shake-n-Shack system can be extremely lightweight, and is able to run in-situ on resource-constrained smartwatches without incurring excessive resource consumption.
[ { "created": "Tue, 23 Jan 2018 14:23:13 GMT", "version": "v1" } ]
2018-01-24
[ [ "Shen", "Yiran", "" ], [ "Yang", "Fengyuan", "" ], [ "Du", "Bowen", "" ], [ "Xu", "Weitao", "" ], [ "Luo", "Chengwen", "" ], [ "Wen", "Hongkai", "" ] ]
Since ancient Greece, handshaking has been commonly practiced between two people as a friendly gesture to express trust and respect, or to form a mutual agreement. In this paper, we show that such physical contact can be used to bootstrap secure cyber contact between the smart devices worn by users. The key observation is that during handshaking, although belonging to two different users, the two hands involved in the shaking events are often rigidly connected, and therefore exhibit very similar motion patterns. We propose a novel Shake-n-Shack system, which harvests motion data during user handshaking from wrist-worn smart devices such as smartwatches or fitness bands, and exploits the matching motion patterns to generate symmetric keys on both parties. The generated keys can then be used to establish a secure communication channel for exchanging data between devices. This provides a much more natural and user-friendly alternative for many applications, e.g., exchanging/sharing contact details, friending on social networks, or even making payments, since it doesn't involve extra bespoke hardware, nor require the users to perform pre-defined gestures. We implement the proposed Shake-n-Shack system on off-the-shelf smartwatches, and extensive evaluation shows that it can reliably generate 128-bit symmetric keys after just around 1s of handshaking (with success rate >99%), and is resilient to real-time mimicking attacks: in our experiments the Equal Error Rate (EER) is only 1.6% on average. We also show that the proposed Shake-n-Shack system can be extremely lightweight, and is able to run in-situ on resource-constrained smartwatches without incurring excessive resource consumption.
2011.13495
Zhizhong Han
Baorui Ma and Zhizhong Han and Yu-Shen Liu and Matthias Zwicker
Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces
To appear at ICML2021. Code and data are available at https://github.com/mabaorui/NeuralPull
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing. Several recent state-of-the-art methods address this problem using neural networks to learn signed distance functions (SDFs). In this paper, we introduce \textit{Neural-Pull}, a new approach that is simple and leads to high quality SDFs. Specifically, we train a neural network to pull query 3D locations to their closest points on the surface using the predicted signed distance values and the gradient at the query locations, both of which are computed by the network itself. The pulling operation moves each query location with a stride given by the distance predicted by the network. Based on the sign of the distance, this may move the query location along or against the direction of the gradient of the SDF. This is a differentiable operation that allows us to update the signed distance value and the gradient simultaneously during training. Our outperforming results under widely used benchmarks demonstrate that we can learn SDFs more accurately and flexibly for surface reconstruction and single image reconstruction than the state-of-the-art methods.
[ { "created": "Thu, 26 Nov 2020 23:18:10 GMT", "version": "v1" }, { "created": "Sun, 23 May 2021 17:54:34 GMT", "version": "v2" } ]
2021-05-25
[ [ "Ma", "Baorui", "" ], [ "Han", "Zhizhong", "" ], [ "Liu", "Yu-Shen", "" ], [ "Zwicker", "Matthias", "" ] ]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing. Several recent state-of-the-art methods address this problem using neural networks to learn signed distance functions (SDFs). In this paper, we introduce \textit{Neural-Pull}, a new approach that is simple and leads to high quality SDFs. Specifically, we train a neural network to pull query 3D locations to their closest points on the surface using the predicted signed distance values and the gradient at the query locations, both of which are computed by the network itself. The pulling operation moves each query location with a stride given by the distance predicted by the network. Based on the sign of the distance, this may move the query location along or against the direction of the gradient of the SDF. This is a differentiable operation that allows us to update the signed distance value and the gradient simultaneously during training. Our outperforming results under widely used benchmarks demonstrate that we can learn SDFs more accurately and flexibly for surface reconstruction and single image reconstruction than the state-of-the-art methods.
1002.3187
Seyed Hamed Hassani
S. Hamed Hassani, Kasra Alishahi, Rudiger Urbanke
On the scaling of Polar Codes: II. The behavior of un-polarized channels
Submitted to ISIT 2010
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide upper and lower bounds on the escape rate of the Bhattacharyya process corresponding to polar codes and transmission over the binary erasure channel. More precisely, we bound the exponent of the number of sub-channels whose Bhattacharyya constant falls in a fixed interval $[a,b]$. Mathematically this can be stated as bounding the limit $\lim_{n \to \infty} \frac{1}{n} \ln \mathbb{P}(Z_n \in [a,b])$, where $Z_n$ is the Bhattacharyya process. The quantity $\mathbb{P}(Z_n \in [a,b])$ represents the fraction of sub-channels that are still un-polarized at time $n$.
[ { "created": "Wed, 17 Feb 2010 03:55:40 GMT", "version": "v1" }, { "created": "Thu, 18 Feb 2010 07:54:04 GMT", "version": "v2" } ]
2010-02-18
[ [ "Hassani", "S. Hamed", "" ], [ "Alishahi", "Kasra", "" ], [ "Urbanke", "Rudiger", "" ] ]
We provide upper and lower bounds on the escape rate of the Bhattacharyya process corresponding to polar codes and transmission over the binary erasure channel. More precisely, we bound the exponent of the number of sub-channels whose Bhattacharyya constant falls in a fixed interval $[a,b]$. Mathematically this can be stated as bounding the limit $\lim_{n \to \infty} \frac{1}{n} \ln \mathbb{P}(Z_n \in [a,b])$, where $Z_n$ is the Bhattacharyya process. The quantity $\mathbb{P}(Z_n \in [a,b])$ represents the fraction of sub-channels that are still un-polarized at time $n$.
1709.07758
Farhana Ferdousi Liza
Farhana Ferdousi Liza and Marek Grzes
Improving Language Modelling with Noise-contrastive estimation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural language models do not scale well when the vocabulary is large. Noise-contrastive estimation (NCE) is a sampling-based method that allows for fast learning with large vocabularies. Although NCE has shown promising performance in neural machine translation, it was considered to be an unsuccessful approach for language modelling. A sufficient investigation of the hyperparameters in the NCE-based neural language models was also missing. In this paper, we showed that NCE can be a successful approach in neural language modelling when the hyperparameters of a neural network are tuned appropriately. We introduced the 'search-then-converge' learning rate schedule for NCE and designed a heuristic that specifies how to use this schedule. The impact of the other important hyperparameters, such as the dropout rate and the weight initialisation range, was also demonstrated. We showed that appropriate tuning of NCE-based neural language models outperforms the state-of-the-art single-model methods on a popular benchmark.
[ { "created": "Fri, 22 Sep 2017 13:59:17 GMT", "version": "v1" } ]
2017-09-25
[ [ "Liza", "Farhana Ferdousi", "" ], [ "Grzes", "Marek", "" ] ]
Neural language models do not scale well when the vocabulary is large. Noise-contrastive estimation (NCE) is a sampling-based method that allows for fast learning with large vocabularies. Although NCE has shown promising performance in neural machine translation, it was considered to be an unsuccessful approach for language modelling. A sufficient investigation of the hyperparameters in the NCE-based neural language models was also missing. In this paper, we showed that NCE can be a successful approach in neural language modelling when the hyperparameters of a neural network are tuned appropriately. We introduced the 'search-then-converge' learning rate schedule for NCE and designed a heuristic that specifies how to use this schedule. The impact of the other important hyperparameters, such as the dropout rate and the weight initialisation range, was also demonstrated. We showed that appropriate tuning of NCE-based neural language models outperforms the state-of-the-art single-model methods on a popular benchmark.
1711.06616
Omid Haji Maghsoudi
Omid Haji Maghsoudi
Superpixels Based Segmentation and SVM Based Classification Method to Distinguish Five Diseases from Normal Regions in Wireless Capsule Endoscopy
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless Capsule Endoscopy (WCE) is a relatively new technology for examining the entire GI tract. During an examination, it captures more than 55,000 frames. Reviewing all these images is time-consuming and prone to human error. It has been a challenge to develop intelligent methods that assist physicians in reviewing the frames. The WCE frames are captured in 8-bit color depth, which provides a sufficient color range to detect abnormalities. Here, superpixel-based methods are proposed to segment five diseases: bleeding, Crohn's disease, Lymphangiectasia, Xanthoma, and Lymphoid hyperplasia. Two superpixel methods are compared to provide semantic segmentation of these prevalent diseases: simple linear iterative clustering (SLIC) and quick shift (QS). The segmented superpixels were classified into two classes (normal and abnormal) by a support vector machine (SVM) using texture and color features. For both superpixel methods, the accuracy, specificity, sensitivity, and precision (SLIC, QS) were around 92%, 93%, 93%, and 88%, respectively. However, SLIC was dramatically faster than QS.
[ { "created": "Fri, 17 Nov 2017 16:25:34 GMT", "version": "v1" } ]
2017-11-20
[ [ "Maghsoudi", "Omid Haji", "" ] ]
Wireless Capsule Endoscopy (WCE) is a relatively new technology for examining the entire GI tract. During an examination, it captures more than 55,000 frames. Reviewing all these images is time-consuming and prone to human error. It has been a challenge to develop intelligent methods that assist physicians in reviewing the frames. The WCE frames are captured in 8-bit color depth, which provides a sufficient color range to detect abnormalities. Here, superpixel-based methods are proposed to segment five diseases: bleeding, Crohn's disease, Lymphangiectasia, Xanthoma, and Lymphoid hyperplasia. Two superpixel methods are compared to provide semantic segmentation of these prevalent diseases: simple linear iterative clustering (SLIC) and quick shift (QS). The segmented superpixels were classified into two classes (normal and abnormal) by a support vector machine (SVM) using texture and color features. For both superpixel methods, the accuracy, specificity, sensitivity, and precision (SLIC, QS) were around 92%, 93%, 93%, and 88%, respectively. However, SLIC was dramatically faster than QS.
2104.13255
Ting-Wu Chin
Ting-Wu Chin, Diana Marculescu, Ari S. Morcos
Width Transfer: On the (In)variance of Width Optimization
Full paper accepted at CVPR Workshops 2021; a 4-page abridged version is accepted at ICLR 2021 NAS Workshop
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Optimizing the channel counts for different layers of a CNN has shown great promise in improving the efficiency of CNNs at test-time. However, these methods often introduce large computational overhead (e.g., an additional 2x FLOPs of standard training). Minimizing this overhead could therefore significantly speed up training. In this work, we propose width transfer, a technique that harnesses the assumptions that the optimized widths (or channel counts) are regular across sizes and depths. We show that width transfer works well across various width optimization algorithms and networks. Specifically, we can achieve up to 320x reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet, making the additional cost of width optimization negligible relative to initial training. Our findings not only suggest an efficient way to conduct width optimization but also highlight that the widths that lead to better accuracy are invariant to various aspects of network architectures and training data.
[ { "created": "Sat, 24 Apr 2021 19:51:53 GMT", "version": "v1" } ]
2021-04-28
[ [ "Chin", "Ting-Wu", "" ], [ "Marculescu", "Diana", "" ], [ "Morcos", "Ari S.", "" ] ]
Optimizing the channel counts for different layers of a CNN has shown great promise in improving the efficiency of CNNs at test-time. However, these methods often introduce large computational overhead (e.g., an additional 2x FLOPs of standard training). Minimizing this overhead could therefore significantly speed up training. In this work, we propose width transfer, a technique that harnesses the assumptions that the optimized widths (or channel counts) are regular across sizes and depths. We show that width transfer works well across various width optimization algorithms and networks. Specifically, we can achieve up to 320x reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet, making the additional cost of width optimization negligible relative to initial training. Our findings not only suggest an efficient way to conduct width optimization but also highlight that the widths that lead to better accuracy are invariant to various aspects of network architectures and training data.
1704.02958
Arturs Backurs
Arturs Backurs, Piotr Indyk, Ludwig Schmidt
On the Fine-Grained Complexity of Empirical Risk Minimization: Kernel Methods and Neural Networks
null
null
null
null
cs.CC cs.DS cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Empirical risk minimization (ERM) is ubiquitous in machine learning and underlies most supervised learning methods. While there has been a large body of work on algorithms for various ERM problems, the exact computational complexity of ERM is still not understood. We address this issue for multiple popular ERM problems including kernel SVMs, kernel ridge regression, and training the final layer of a neural network. In particular, we give conditional hardness results for these problems based on complexity-theoretic assumptions such as the Strong Exponential Time Hypothesis. Under these assumptions, we show that there are no algorithms that solve the aforementioned ERM problems to high accuracy in sub-quadratic time. We also give similar hardness results for computing the gradient of the empirical loss, which is the main computational burden in many non-convex learning tasks.
[ { "created": "Mon, 10 Apr 2017 17:26:41 GMT", "version": "v1" } ]
2017-04-11
[ [ "Backurs", "Arturs", "" ], [ "Indyk", "Piotr", "" ], [ "Schmidt", "Ludwig", "" ] ]
Empirical risk minimization (ERM) is ubiquitous in machine learning and underlies most supervised learning methods. While there has been a large body of work on algorithms for various ERM problems, the exact computational complexity of ERM is still not understood. We address this issue for multiple popular ERM problems including kernel SVMs, kernel ridge regression, and training the final layer of a neural network. In particular, we give conditional hardness results for these problems based on complexity-theoretic assumptions such as the Strong Exponential Time Hypothesis. Under these assumptions, we show that there are no algorithms that solve the aforementioned ERM problems to high accuracy in sub-quadratic time. We also give similar hardness results for computing the gradient of the empirical loss, which is the main computational burden in many non-convex learning tasks.
2003.14058
Yuan Gao
Yuan Gao, Haoping Bai, Zequn Jie, Jiayi Ma, Kui Jia, and Wei Liu
MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning
Accepted to CVPR2020. The first two authors contribute equally
IEEE Conference on Computer Vision and Pattern Recognition, 2020
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose to incorporate neural architecture search (NAS) into general-purpose multi-task learning (GP-MTL). Existing NAS methods typically define different search spaces according to different tasks. In order to adapt to different task combinations (i.e., task sets), we disentangle the GP-MTL networks into single-task backbones (optionally encoding the task priors), and a hierarchical and layerwise feature sharing/fusing scheme across them. This enables us to design a novel and general task-agnostic search space, which inserts cross-task edges (i.e., feature fusion connections) into fixed single-task network backbones. Moreover, we also propose a novel single-shot gradient-based search algorithm that closes the performance gap between the searched architectures and the final evaluation architecture. This is realized with a minimum entropy regularization on the architecture weights during the search phase, which makes the architecture weights converge to near-discrete values and therefore yields a single model. As a result, our searched model can be directly used for evaluation without (re-)training from scratch. We perform extensive experiments using different single-task backbones on various task sets, demonstrating the promising performance obtained by exploiting the hierarchical and layerwise features, as well as the desirable generalizability to different i) task sets and ii) single-task backbones. The code of our paper is available at https://github.com/bhpfelix/MTLNAS.
[ { "created": "Tue, 31 Mar 2020 09:49:14 GMT", "version": "v1" } ]
2020-04-01
[ [ "Gao", "Yuan", "" ], [ "Bai", "Haoping", "" ], [ "Jie", "Zequn", "" ], [ "Ma", "Jiayi", "" ], [ "Jia", "Kui", "" ], [ "Liu", "Wei", "" ] ]
We propose to incorporate neural architecture search (NAS) into general-purpose multi-task learning (GP-MTL). Existing NAS methods typically define different search spaces according to different tasks. In order to adapt to different task combinations (i.e., task sets), we disentangle the GP-MTL networks into single-task backbones (optionally encoding the task priors), and a hierarchical and layerwise feature sharing/fusing scheme across them. This enables us to design a novel and general task-agnostic search space, which inserts cross-task edges (i.e., feature fusion connections) into fixed single-task network backbones. Moreover, we also propose a novel single-shot gradient-based search algorithm that closes the performance gap between the searched architectures and the final evaluation architecture. This is realized with a minimum entropy regularization on the architecture weights during the search phase, which makes the architecture weights converge to near-discrete values and therefore yields a single model. As a result, our searched model can be directly used for evaluation without (re-)training from scratch. We perform extensive experiments using different single-task backbones on various task sets, demonstrating the promising performance obtained by exploiting the hierarchical and layerwise features, as well as the desirable generalizability to different i) task sets and ii) single-task backbones. The code of our paper is available at https://github.com/bhpfelix/MTLNAS.
1203.2511
Victor Seal
Victor Seal, Arnab Raha, Shovan Maity, Souvik Kr Mitra, Amitava Mukherjee and Mrinal Kanti Naskar
A Simple Flood Forecasting Scheme Using Wireless Sensor Networks
16 pages, 4 figures, published in International Journal of Ad-Hoc, Sensor and Ubiquitous Computing, February 2012; V. Seal et al., 'A Simple Flood Forecasting Scheme Using Wireless Sensor Networks', IJASUC, Feb. 2012
null
10.5121/ijasuc.2012.3105
null
cs.LG cs.CE cs.NI cs.SY stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a forecasting model designed using wireless sensor networks (WSNs) to predict floods in rivers using simple and fast calculations, providing real-time results that can help save the lives of people who may be affected by the flood. Our prediction model uses multiple-variable robust linear regression, which is easy to understand, simple and cost-effective to implement, and fast, with low resource utilization, yet provides real-time predictions with reliable accuracy; these features are desirable in any real-world algorithm. Our prediction model is independent of the number of parameters, i.e., any number of parameters may be added or removed based on on-site requirements. When the water level rises, we represent it using a polynomial whose nature is used to determine whether the water level may exceed the flood line in the near future. We compare our work with a contemporary algorithm to demonstrate our improvements over it. We then present our simulation results for the predicted water level compared to the actual water level.
[ { "created": "Fri, 9 Mar 2012 18:08:34 GMT", "version": "v1" } ]
2012-03-13
[ [ "Seal", "Victor", "" ], [ "Raha", "Arnab", "" ], [ "Maity", "Shovan", "" ], [ "Mitra", "Souvik Kr", "" ], [ "Mukherjee", "Amitava", "" ], [ "Naskar", "Mrinal Kanti", "" ] ]
This paper presents a forecasting model designed using wireless sensor networks (WSNs) to predict floods in rivers using simple and fast calculations, providing real-time results that can help save the lives of people who may be affected by the flood. Our prediction model uses multiple-variable robust linear regression, which is easy to understand, simple and cost-effective to implement, and fast, with low resource utilization, yet provides real-time predictions with reliable accuracy; these features are desirable in any real-world algorithm. Our prediction model is independent of the number of parameters, i.e., any number of parameters may be added or removed based on on-site requirements. When the water level rises, we represent it using a polynomial whose nature is used to determine whether the water level may exceed the flood line in the near future. We compare our work with a contemporary algorithm to demonstrate our improvements over it. We then present our simulation results for the predicted water level compared to the actual water level.
1412.8185
Yuliya Boyarinova
Yakiv O. Kalinovsky, Yuliya E. Boyarinova, Alina S. Turenko, Yana V. Khitsko
Generalized quaternions and their relations with Grassmann-Clifford procedure of doubling
arXiv admin note: substantial text overlap with arXiv:1409.3193
null
null
null
cs.NA math.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article investigates the class of non-commutative four-dimensional hypercomplex number systems (HNS) constructed by applying the non-commutative Grassmann-Clifford doubling procedure to two-dimensional systems, and establishes their relationships with the generalized quaternions. Algorithms for performing operations in these systems and methods for computing their algebraic characteristics, such as conjugation, normalization, and the types of zero divisors, are investigated. The arithmetic and algebraic operations and procedures considered for this class of HNS make these HNS usable in mathematical modeling.
[ { "created": "Sun, 28 Dec 2014 16:44:30 GMT", "version": "v1" } ]
2014-12-30
[ [ "Kalinovsky", "Yakiv O.", "" ], [ "Boyarinova", "Yuliya E.", "" ], [ "Turenko", "Alina S.", "" ], [ "Khitsko", "Yana V.", "" ] ]
This article investigates the class of non-commutative four-dimensional hypercomplex number systems (HNS) constructed by applying the non-commutative Grassmann-Clifford doubling procedure to two-dimensional systems, and establishes their relationships with the generalized quaternions. Algorithms for performing operations in these systems and methods for computing their algebraic characteristics, such as conjugation, normalization, and the types of zero divisors, are investigated. The arithmetic and algebraic operations and procedures considered for this class of HNS make these HNS usable in mathematical modeling.
2309.06877
Xinyang Yu
Zhenguang Liu, Xinyang Yu, Ruili Wang, Shuai Ye, Zhe Ma, Jianfeng Dong, Sifeng He, Feng Qian, Xiaobo Zhang, Roger Zimmermann, Lei Yang
Video Infringement Detection via Feature Disentanglement and Mutual Information Maximization
This paper is accepted by ACM MM 2023
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The self-media era provides us with a tremendous amount of high-quality videos. Unfortunately, frequent video copyright infringements are now seriously damaging the interests and enthusiasm of video creators. Identifying infringing videos is therefore a compelling task. Current state-of-the-art methods tend to simply feed high-dimensional mixed video features into deep neural networks and count on the networks to extract useful representations. Despite its simplicity, this paradigm heavily relies on the original entangled features and lacks constraints guaranteeing that useful task-relevant semantics are extracted from the features. In this paper, we seek to tackle the above challenges from two aspects: (1) We propose to disentangle an original high-dimensional feature into multiple sub-features, explicitly decomposing the feature into exclusive lower-dimensional components. We expect the sub-features to encode non-overlapping semantics of the original feature and remove redundant information. (2) On top of the disentangled sub-features, we further learn an auxiliary feature to enhance the sub-features. We theoretically analyze the mutual information between the label and the disentangled features, arriving at a loss that maximizes the extraction of task-relevant information from the original feature. Extensive experiments on two large-scale benchmark datasets (i.e., SVD and VCSL) demonstrate that our method achieves 90.1% TOP-100 mAP on the large-scale SVD dataset and also sets the new state-of-the-art on the VCSL benchmark dataset. Our code and model have been released at https://github.com/yyyooooo/DMI/, hoping to contribute to the community.
[ { "created": "Wed, 13 Sep 2023 10:53:12 GMT", "version": "v1" } ]
2023-09-14
[ [ "Liu", "Zhenguang", "" ], [ "Yu", "Xinyang", "" ], [ "Wang", "Ruili", "" ], [ "Ye", "Shuai", "" ], [ "Ma", "Zhe", "" ], [ "Dong", "Jianfeng", "" ], [ "He", "Sifeng", "" ], [ "Qian", "Feng", "" ], [ "Zhang", "Xiaobo", "" ], [ "Zimmermann", "Roger", "" ], [ "Yang", "Lei", "" ] ]
The self-media era provides us with a tremendous amount of high-quality videos. Unfortunately, frequent video copyright infringements are now seriously damaging the interests and enthusiasm of video creators. Identifying infringing videos is therefore a compelling task. Current state-of-the-art methods tend to simply feed high-dimensional mixed video features into deep neural networks and count on the networks to extract useful representations. Despite its simplicity, this paradigm heavily relies on the original entangled features and lacks constraints guaranteeing that useful task-relevant semantics are extracted from the features. In this paper, we seek to tackle the above challenges from two aspects: (1) We propose to disentangle an original high-dimensional feature into multiple sub-features, explicitly decomposing the feature into exclusive lower-dimensional components. We expect the sub-features to encode non-overlapping semantics of the original feature and remove redundant information. (2) On top of the disentangled sub-features, we further learn an auxiliary feature to enhance the sub-features. We theoretically analyze the mutual information between the label and the disentangled features, arriving at a loss that maximizes the extraction of task-relevant information from the original feature. Extensive experiments on two large-scale benchmark datasets (i.e., SVD and VCSL) demonstrate that our method achieves 90.1% TOP-100 mAP on the large-scale SVD dataset and also sets the new state-of-the-art on the VCSL benchmark dataset. Our code and model have been released at https://github.com/yyyooooo/DMI/, hoping to contribute to the community.
1610.07563
Jinbo Bi
Xin Wang, Jinbo Bi, Shipeng Yu, Jiangwen Sun
On Multiplicative Multitask Feature Learning
Advances in Neural Information Processing Systems 2014
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a general framework of multiplicative multitask feature learning which decomposes each task's model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods have been proposed as special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effect. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. Empirical studies have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks.
[ { "created": "Mon, 24 Oct 2016 19:27:52 GMT", "version": "v1" } ]
2016-10-25
[ [ "Wang", "Xin", "" ], [ "Bi", "Jinbo", "" ], [ "Yu", "Shipeng", "" ], [ "Sun", "Jiangwen", "" ] ]
We investigate a general framework of multiplicative multitask feature learning which decomposes each task's model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods have been proposed as special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effect. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. Empirical studies have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks.
2106.02320
Gengwei Zhang
Gengwei Zhang, Guoliang Kang, Yi Yang, Yunchao Wei
Few-Shot Segmentation via Cycle-Consistent Transformer
Advances in Neural Information Processing Systems (NeurIPS), 2021. Project: https://github.com/GengDavid/CyCTR
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot segmentation aims to train a segmentation model that can quickly adapt to novel classes with few exemplars. The conventional training paradigm is to learn to make predictions on query images conditioned on the features from support images. Previous methods only utilized the semantic-level prototypes of support images as conditional information. These methods cannot utilize all pixel-wise support information for the query predictions, which is however critical for the segmentation task. In this paper, we focus on utilizing pixel-wise relationships between support and query images to facilitate the few-shot segmentation task. We design a novel Cycle-Consistent TRansformer (CyCTR) module to aggregate pixel-wise support features into query ones. CyCTR performs cross-attention between features from different images, i.e. support and query images. We observe that there may exist unexpected irrelevant pixel-level support features. Directly performing cross-attention may aggregate these features from support to query and bias the query features. Thus, we propose using a novel cycle-consistent attention mechanism to filter out possible harmful support features and encourage query features to attend to the most informative pixels from support images. Experiments on all few-shot segmentation benchmarks demonstrate that our proposed CyCTR leads to remarkable improvement compared to previous state-of-the-art methods. Specifically, on Pascal-$5^i$ and COCO-$20^i$ datasets, we achieve 67.5% and 45.6% mIoU for 5-shot segmentation, outperforming previous state-of-the-art methods by 5.6% and 7.1% respectively.
[ { "created": "Fri, 4 Jun 2021 07:57:48 GMT", "version": "v1" }, { "created": "Wed, 20 Oct 2021 11:50:27 GMT", "version": "v2" }, { "created": "Tue, 21 Dec 2021 07:24:53 GMT", "version": "v3" }, { "created": "Tue, 8 Mar 2022 00:20:03 GMT", "version": "v4" } ]
2022-03-09
[ [ "Zhang", "Gengwei", "" ], [ "Kang", "Guoliang", "" ], [ "Yang", "Yi", "" ], [ "Wei", "Yunchao", "" ] ]
Few-shot segmentation aims to train a segmentation model that can quickly adapt to novel classes with few exemplars. The conventional training paradigm is to learn to make predictions on query images conditioned on the features from support images. Previous methods only utilized the semantic-level prototypes of support images as conditional information. These methods cannot utilize all pixel-wise support information for the query predictions, which is however critical for the segmentation task. In this paper, we focus on utilizing pixel-wise relationships between support and query images to facilitate the few-shot segmentation task. We design a novel Cycle-Consistent TRansformer (CyCTR) module to aggregate pixel-wise support features into query ones. CyCTR performs cross-attention between features from different images, i.e. support and query images. We observe that there may exist unexpected irrelevant pixel-level support features. Directly performing cross-attention may aggregate these features from support to query and bias the query features. Thus, we propose using a novel cycle-consistent attention mechanism to filter out possible harmful support features and encourage query features to attend to the most informative pixels from support images. Experiments on all few-shot segmentation benchmarks demonstrate that our proposed CyCTR leads to remarkable improvement compared to previous state-of-the-art methods. Specifically, on Pascal-$5^i$ and COCO-$20^i$ datasets, we achieve 67.5% and 45.6% mIoU for 5-shot segmentation, outperforming previous state-of-the-art methods by 5.6% and 7.1% respectively.
1204.5431
Mohammad Tofighi
Mohammad Tofighi and Hashem Kalbkhani and Mahrokh G. Shayesteh and Mehdi Ghasemzadeh
Robust Head Pose Estimation Using Contourlet Transform
5 pages, conference paper
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating the pose of the head is an important preprocessing step in many pattern recognition and computer vision systems, such as face recognition. Since the performance of face recognition systems is greatly affected by the pose of the face, accurately estimating the pose of the face in a human face image remains a challenging problem. In this paper, we present a novel method for head pose estimation. To enhance the efficiency of the estimation, we use the contourlet transform for feature extraction. The contourlet transform is a multi-resolution, multi-directional transform. In order to reduce the dimension of the feature space and obtain appropriate features, we use LDA (Linear Discriminant Analysis) and PCA (Principal Component Analysis) to remove inefficient features. Then, we apply different classifiers, such as k-nearest neighbor (kNN) and minimum distance. We use the publicly available FERET database to evaluate the performance of the proposed method. Simulation results indicate the superior robustness of the proposed method.
[ { "created": "Tue, 24 Apr 2012 17:08:04 GMT", "version": "v1" }, { "created": "Sat, 12 May 2012 13:56:32 GMT", "version": "v2" } ]
2012-05-15
[ [ "Tofighi", "Mohammad", "" ], [ "Kalbkhani", "Hashem", "" ], [ "Shayesteh", "Mahrokh G.", "" ], [ "Ghasemzadeh", "Mehdi", "" ] ]
Estimating the pose of the head is an important preprocessing step in many pattern recognition and computer vision systems, such as face recognition. Since the performance of face recognition systems is greatly affected by the pose of the face, accurately estimating the pose of the face in a human face image remains a challenging problem. In this paper, we present a novel method for head pose estimation. To enhance the efficiency of the estimation, we use the contourlet transform for feature extraction. The contourlet transform is a multi-resolution, multi-directional transform. In order to reduce the dimension of the feature space and obtain appropriate features, we use LDA (Linear Discriminant Analysis) and PCA (Principal Component Analysis) to remove inefficient features. Then, we apply different classifiers, such as k-nearest neighbor (kNN) and minimum distance. We use the publicly available FERET database to evaluate the performance of the proposed method. Simulation results indicate the superior robustness of the proposed method.
2401.00547
Avvaru Ch Madhusudanarao
A Ch Madhusudanarao, Rahul Singh
On Learning for Ambiguous Chance Constrained Problems
We have "not considered the uniform bound" for violation probabilities corresponding to the set of distributions in the ambiguity set
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study chance constrained optimization problems $\min_x f(x)$ s.t. $P(\left\{ \theta: g(x,\theta)\le 0 \right\})\ge 1-\epsilon$ where $\epsilon\in (0,1)$ is the violation probability, when the distribution $P$ is not known to the decision maker (DM). When the DM has access to a set of distributions $\mathcal{U}$ such that $P$ is contained in $\mathcal{U}$, then the problem is known as the ambiguous chance-constrained problem \cite{erdougan2006ambiguous}. We study ambiguous chance-constrained problem for the case when $\mathcal{U}$ is of the form $\left\{\mu:\frac{\mu (y)}{\nu(y)}\leq C, \forall y\in\Theta, \mu(y)\ge 0\right\}$, where $\nu$ is a ``reference distribution.'' We show that in this case the original problem can be ``well-approximated'' by a sampled problem in which $N$ i.i.d. samples of $\theta$ are drawn from $\nu$, and the original constraint is replaced with $g(x,\theta_i)\le 0,~i=1,2,\ldots,N$. We also derive the sample complexity associated with this approximation, i.e., for $\epsilon,\delta>0$ the number of samples which must be drawn from $\nu$ so that with a probability greater than $1-\delta$ (over the randomness of $\nu$), the solution obtained by solving the sampled program yields an $\epsilon$-feasible solution for the original chance constrained problem.
[ { "created": "Sun, 31 Dec 2023 17:25:43 GMT", "version": "v1" }, { "created": "Sun, 11 Feb 2024 06:07:17 GMT", "version": "v2" } ]
2024-02-13
[ [ "Madhusudanarao", "A Ch", "" ], [ "Singh", "Rahul", "" ] ]
We study chance constrained optimization problems $\min_x f(x)$ s.t. $P(\left\{ \theta: g(x,\theta)\le 0 \right\})\ge 1-\epsilon$ where $\epsilon\in (0,1)$ is the violation probability, when the distribution $P$ is not known to the decision maker (DM). When the DM has access to a set of distributions $\mathcal{U}$ such that $P$ is contained in $\mathcal{U}$, then the problem is known as the ambiguous chance-constrained problem \cite{erdougan2006ambiguous}. We study ambiguous chance-constrained problem for the case when $\mathcal{U}$ is of the form $\left\{\mu:\frac{\mu (y)}{\nu(y)}\leq C, \forall y\in\Theta, \mu(y)\ge 0\right\}$, where $\nu$ is a ``reference distribution.'' We show that in this case the original problem can be ``well-approximated'' by a sampled problem in which $N$ i.i.d. samples of $\theta$ are drawn from $\nu$, and the original constraint is replaced with $g(x,\theta_i)\le 0,~i=1,2,\ldots,N$. We also derive the sample complexity associated with this approximation, i.e., for $\epsilon,\delta>0$ the number of samples which must be drawn from $\nu$ so that with a probability greater than $1-\delta$ (over the randomness of $\nu$), the solution obtained by solving the sampled program yields an $\epsilon$-feasible solution for the original chance constrained problem.
2101.00790
Amir K. Khandani Dr.
Amir K. Khandani
Achieving Capacity Region of 2-users Weak GIC by Enlarging the Core in a Nested Set of Polymatroids (continuation of arXiv:2012.07820 "Optimality of Gaussian in Enlarging HK Rate Region, and its Overlap with ...")
20 pages, 4 figures
null
null
null
cs.IT math.CO math.IT
http://creativecommons.org/publicdomain/zero/1.0/
This article shows that achieving the capacity region of a two-user weak Gaussian Interference Channel (GIC) is equivalent to enlarging the core in a nested set of polymatroids (each equivalent to the capacity region of a multiple-access channel) by maximizing a minimum rate, then projecting along its orthogonal span and continuing recursively. This formulation relies on defining dummy private messages to capture the effect of interference in the GIC. It follows that relying on independent Gaussian random codebooks is optimum, and the corresponding solution achieves the boundary of the HK constraints.
[ { "created": "Mon, 4 Jan 2021 06:07:56 GMT", "version": "v1" }, { "created": "Wed, 20 Jan 2021 18:45:29 GMT", "version": "v2" }, { "created": "Mon, 25 Jan 2021 05:38:25 GMT", "version": "v3" }, { "created": "Thu, 28 Jan 2021 15:58:16 GMT", "version": "v4" }, { "created": "Mon, 1 Feb 2021 23:33:22 GMT", "version": "v5" } ]
2021-02-03
[ [ "Khandani", "Amir K.", "" ] ]
This article shows that achieving the capacity region of a two-user weak Gaussian Interference Channel (GIC) is equivalent to enlarging the core in a nested set of polymatroids (each equivalent to the capacity region of a multiple-access channel) by maximizing a minimum rate, then projecting along its orthogonal span and continuing recursively. This formulation relies on defining dummy private messages to capture the effect of interference in the GIC. It follows that relying on independent Gaussian random codebooks is optimum, and the corresponding solution achieves the boundary of the HK constraints.
2109.13325
Chen Quan
Chen Quan, Baocheng Geng, Yunghsiang S. Han and Pramod K. Varshney
Enhanced Audit Bit Based Distributed Bayesian Detection in the Presence of Strategic Attacks
null
null
null
null
cs.CR eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper employs an audit-bit-based mechanism to mitigate the effect of Byzantine attacks. In this framework, the optimal attacking strategy for intelligent attackers is investigated for the traditional audit bit based scheme (TAS) to evaluate the robustness of the system. We show that it is possible for an intelligent attacker to degrade the performance of TAS to that of the system without audit bits. To enhance the robustness of the system in the presence of intelligent attackers, we propose an enhanced audit bit based scheme (EAS). The optimal fusion rule for the proposed scheme is derived, and the detection performance of the system is evaluated via its probability of error. Simulation results show that the proposed EAS improves both the robustness and the detection performance of the system. Moreover, based on EAS, another new scheme called the reduced audit bit based scheme (RAS) is proposed, which further improves system performance. We derive the new optimal fusion rule, and the simulation results show that RAS outperforms EAS and TAS in terms of both robustness and detection performance. Then, we extend the proposed RAS to wide-area cluster-based distributed wireless sensor networks (CWSNs). Simulation results show that the proposed RAS significantly reduces the communication overhead between the sensors and the fusion center (FC), which prolongs the lifetime of the network.
[ { "created": "Mon, 27 Sep 2021 19:58:26 GMT", "version": "v1" } ]
2021-09-29
[ [ "Quan", "Chen", "" ], [ "Geng", "Baocheng", "" ], [ "Han", "Yunghsiang S.", "" ], [ "Varshney", "Pramod K.", "" ] ]
This paper employs an audit-bit-based mechanism to mitigate the effect of Byzantine attacks. In this framework, the optimal attacking strategy for intelligent attackers is investigated for the traditional audit bit based scheme (TAS) to evaluate the robustness of the system. We show that it is possible for an intelligent attacker to degrade the performance of TAS to that of the system without audit bits. To enhance the robustness of the system in the presence of intelligent attackers, we propose an enhanced audit bit based scheme (EAS). The optimal fusion rule for the proposed scheme is derived, and the detection performance of the system is evaluated via its probability of error. Simulation results show that the proposed EAS improves both the robustness and the detection performance of the system. Moreover, based on EAS, another new scheme called the reduced audit bit based scheme (RAS) is proposed, which further improves system performance. We derive the new optimal fusion rule, and the simulation results show that RAS outperforms EAS and TAS in terms of both robustness and detection performance. Then, we extend the proposed RAS to wide-area cluster-based distributed wireless sensor networks (CWSNs). Simulation results show that the proposed RAS significantly reduces the communication overhead between the sensors and the fusion center (FC), which prolongs the lifetime of the network.
2206.05257
Kamran Alipour
Kamran Alipour, Aditya Lahiri, Ehsan Adeli, Babak Salimi, Michael Pazzani
Explaining Image Classifiers Using Contrastive Counterfactuals in Generative Latent Spaces
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite their high accuracies, modern complex image classifiers cannot be trusted for sensitive tasks due to their unknown decision-making process and potential biases. Counterfactual explanations are very effective in providing transparency for these black-box algorithms. Nevertheless, generating counterfactuals that can have a consistent impact on classifier outputs and yet expose interpretable feature changes is a very challenging task. We introduce a novel method to generate causal and yet interpretable counterfactual explanations for image classifiers using pretrained generative models without any re-training or conditioning. The generative models in this technique are not bound to be trained on the same data as the target classifier. We use this framework to obtain contrastive and causal sufficiency and necessity scores as global explanations for black-box classifiers. On the task of face attribute classification, we show how different attributes influence the classifier output by providing both causal and contrastive feature attributions, and the corresponding counterfactual images.
[ { "created": "Fri, 10 Jun 2022 17:54:46 GMT", "version": "v1" } ]
2022-06-13
[ [ "Alipour", "Kamran", "" ], [ "Lahiri", "Aditya", "" ], [ "Adeli", "Ehsan", "" ], [ "Salimi", "Babak", "" ], [ "Pazzani", "Michael", "" ] ]
Despite their high accuracies, modern complex image classifiers cannot be trusted for sensitive tasks due to their unknown decision-making process and potential biases. Counterfactual explanations are very effective in providing transparency for these black-box algorithms. Nevertheless, generating counterfactuals that can have a consistent impact on classifier outputs and yet expose interpretable feature changes is a very challenging task. We introduce a novel method to generate causal and yet interpretable counterfactual explanations for image classifiers using pretrained generative models without any re-training or conditioning. The generative models in this technique are not bound to be trained on the same data as the target classifier. We use this framework to obtain contrastive and causal sufficiency and necessity scores as global explanations for black-box classifiers. On the task of face attribute classification, we show how different attributes influence the classifier output by providing both causal and contrastive feature attributions, and the corresponding counterfactual images.
2001.04767
Ulderico Fugacci
Ulderico Fugacci, Claudia Landi, Hanife Varl{\i}
Critical Sets of PL and Discrete Morse Theory: a Correspondence
In this version, we have fixed some minor typos
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Piecewise-linear (PL) Morse theory and discrete Morse theory are used in shape analysis tasks to investigate the topological features of discretized spaces. In spite of their common origin in smooth Morse theory, various notions of critical points have been given in the literature for the discrete setting, making a clear understanding of the relationships occurring between them not obvious. This paper aims at providing equivalence results about critical points of the two discretized Morse theories. First of all, we prove the equivalence of the existing notions of PL critical points. Next, under an optimality condition called relative perfectness, we show a dimension agnostic correspondence between the set of PL critical points and that of discrete critical simplices of the combinatorial approach. Finally, we show how a relatively perfect discrete gradient vector field can be algorithmically built up to dimension 3. This way, we guarantee a formal and operative connection between critical sets in the PL and discrete theories.
[ { "created": "Tue, 14 Jan 2020 13:34:19 GMT", "version": "v1" }, { "created": "Sun, 19 Jan 2020 09:16:18 GMT", "version": "v2" }, { "created": "Fri, 8 May 2020 12:15:32 GMT", "version": "v3" }, { "created": "Mon, 18 May 2020 16:38:15 GMT", "version": "v4" } ]
2020-05-19
[ [ "Fugacci", "Ulderico", "" ], [ "Landi", "Claudia", "" ], [ "Varlı", "Hanife", "" ] ]
Piecewise-linear (PL) Morse theory and discrete Morse theory are used in shape analysis tasks to investigate the topological features of discretized spaces. In spite of their common origin in smooth Morse theory, various notions of critical points have been given in the literature for the discrete setting, making the relationships among them far from obvious. This paper aims at providing equivalence results about critical points of the two discretized Morse theories. First of all, we prove the equivalence of the existing notions of PL critical points. Next, under an optimality condition called relative perfectness, we show a dimension-agnostic correspondence between the set of PL critical points and that of discrete critical simplices of the combinatorial approach. Finally, we show how a relatively perfect discrete gradient vector field can be algorithmically built up to dimension 3. This way, we guarantee a formal and operative connection between critical sets in the PL and discrete theories.
2006.14117
Shuai Zhang
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case
null
International Conference on Machine Learning (ICML 2020)
null
null
cs.LG eess.SP math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice, their theoretical guarantee on generalizability remains elusive in the literature. In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems. Under the assumption that there exists a ground-truth GNN model (with zero generalization error), the objective of GNN learning is to estimate the ground-truth GNN parameters from the training data. To achieve this objective, we propose a learning algorithm that is built on tensor initialization and accelerated gradient descent. We then show that the proposed learning algorithm converges to the ground-truth GNN model for the regression problem, and to a model sufficiently close to the ground-truth for the binary classification problem. Moreover, for both cases, the convergence rate of the proposed learning algorithm is proven to be linear and faster than the vanilla gradient descent algorithm. We further explore the relationship between the sample complexity of GNNs and their underlying graph properties. Lastly, we provide numerical experiments to demonstrate the validity of our analysis and the effectiveness of the proposed learning algorithm for GNNs.
[ { "created": "Thu, 25 Jun 2020 00:45:52 GMT", "version": "v1" } ]
2020-06-26
[ [ "Zhang", "Shuai", "" ], [ "Wang", "Meng", "" ], [ "Liu", "Sijia", "" ], [ "Chen", "Pin-Yu", "" ], [ "Xiong", "Jinjun", "" ] ]
Although graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice, their theoretical guarantee on generalizability remains elusive in the literature. In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems. Under the assumption that there exists a ground-truth GNN model (with zero generalization error), the objective of GNN learning is to estimate the ground-truth GNN parameters from the training data. To achieve this objective, we propose a learning algorithm that is built on tensor initialization and accelerated gradient descent. We then show that the proposed learning algorithm converges to the ground-truth GNN model for the regression problem, and to a model sufficiently close to the ground-truth for the binary classification problem. Moreover, for both cases, the convergence rate of the proposed learning algorithm is proven to be linear and faster than the vanilla gradient descent algorithm. We further explore the relationship between the sample complexity of GNNs and their underlying graph properties. Lastly, we provide numerical experiments to demonstrate the validity of our analysis and the effectiveness of the proposed learning algorithm for GNNs.
2209.13429
Yongchan Kwon
Yongchan Kwon, James Zou
WeightedSHAP: analyzing and improving Shapley based feature attributions
null
NeurIPS2022
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The Shapley value is a popular approach for measuring the influence of individual features. While Shapley feature attribution is built upon desiderata from game theory, some of its constraints may be less natural in certain machine learning settings, leading to unintuitive model interpretation. In particular, the Shapley value uses the same weight for all marginal contributions -- i.e. it gives the same importance when a large number of other features are given versus when a small number of other features are given. This property can be problematic if larger feature sets are more or less informative than smaller feature sets. Our work performs a rigorous analysis of the potential limitations of Shapley feature attribution. We identify simple settings where the Shapley value is mathematically suboptimal by assigning larger attributions for less influential features. Motivated by this observation, we propose WeightedSHAP, which generalizes the Shapley value and learns which marginal contributions to focus on directly from data. On several real-world datasets, we demonstrate that the influential features identified by WeightedSHAP are better able to recapitulate the model's predictions compared to the features identified by the Shapley value.
[ { "created": "Tue, 27 Sep 2022 14:34:07 GMT", "version": "v1" } ]
2022-09-28
[ [ "Kwon", "Yongchan", "" ], [ "Zou", "James", "" ] ]
The Shapley value is a popular approach for measuring the influence of individual features. While Shapley feature attribution is built upon desiderata from game theory, some of its constraints may be less natural in certain machine learning settings, leading to unintuitive model interpretation. In particular, the Shapley value uses the same weight for all marginal contributions -- i.e. it gives the same importance when a large number of other features are given versus when a small number of other features are given. This property can be problematic if larger feature sets are more or less informative than smaller feature sets. Our work performs a rigorous analysis of the potential limitations of Shapley feature attribution. We identify simple settings where the Shapley value is mathematically suboptimal by assigning larger attributions for less influential features. Motivated by this observation, we propose WeightedSHAP, which generalizes the Shapley value and learns which marginal contributions to focus on directly from data. On several real-world datasets, we demonstrate that the influential features identified by WeightedSHAP are better able to recapitulate the model's predictions compared to the features identified by the Shapley value.
1709.00596
D\"om\"ot\"or P\'alv\"olgyi
D\"om\"ot\"or P\'alv\"olgyi
Complexity of Domination in Triangulated Plane Graphs
null
null
null
null
cs.CC math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that for a triangulated plane graph it is NP-complete to determine its domination number and its power domination number.
[ { "created": "Sat, 2 Sep 2017 15:21:07 GMT", "version": "v1" } ]
2017-09-05
[ [ "Pálvölgyi", "Dömötör", "" ] ]
We prove that for a triangulated plane graph it is NP-complete to determine its domination number and its power domination number.
1509.08086
Arvind Kumar
Arvind Kumar, Adarsh Anand, Pankaj Kumar Garg and Mohini Agarwal
Optimal Release Time Decision from Fuzzy Mathematical Programming Perspective
10 Pages. arXiv admin note: substantial overlap with text by other authors http://archive.org/stream/Software_Reliability_Assessment_with_OR_Applications/Software_Reliability_Assessment_with_OR_Applications_djvu.txt
International Journal of Pure and Applied Mathematics, Volume 103 No. 2 2015, 359-376
10.12732/ijpam.v103i2.19
null
cs.AI math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Demand for high software reliability requires rigorous testing, followed by the need for robust modeling techniques for software quality prediction. While firms must steadily manage reliability by testing the software vigorously, determining the optimal release time is their biggest concern. In the past, many models have been developed and much research has been devoted to assessing the release time of software. However, the majority of this work deals with crisp formulations. This paper addresses the problem of release time prediction using fuzzy logic. We formulate a fuzzy release time problem that considers the cost of testing under the impact of a warranty period. Results show that the fuzzy model has good adaptability.
[ { "created": "Sun, 27 Sep 2015 11:41:05 GMT", "version": "v1" } ]
2015-09-30
[ [ "Kumar", "Arvind", "" ], [ "Anand", "Adarsh", "" ], [ "Garg", "Pankaj Kumar", "" ], [ "Agarwal", "Mohini", "" ] ]
Demand for high software reliability requires rigorous testing, followed by the need for robust modeling techniques for software quality prediction. While firms must steadily manage reliability by testing the software vigorously, determining the optimal release time is their biggest concern. In the past, many models have been developed and much research has been devoted to assessing the release time of software. However, the majority of this work deals with crisp formulations. This paper addresses the problem of release time prediction using fuzzy logic. We formulate a fuzzy release time problem that considers the cost of testing under the impact of a warranty period. Results show that the fuzzy model has good adaptability.
2204.14213
Dallas Card
Junshen K. Chen and Dallas Card and Dan Jurafsky
Modular Domain Adaptation
Findings of ACL (2022)
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Off-the-shelf models are widely used by computational social science researchers to measure properties of text, such as sentiment. However, without access to source data it is difficult to account for domain shift, which represents a threat to validity. Here, we treat domain adaptation as a modular process that involves separate model producers and model consumers, and show how they can independently cooperate to facilitate more accurate measurements of text. We introduce two lightweight techniques for this scenario, and demonstrate that they reliably increase out-of-domain accuracy on four multi-domain text classification datasets when used with linear and contextual embedding models. We conclude with recommendations for model producers and consumers, and release models and replication code to accompany this paper.
[ { "created": "Tue, 26 Apr 2022 22:08:58 GMT", "version": "v1" } ]
2022-05-02
[ [ "Chen", "Junshen K.", "" ], [ "Card", "Dallas", "" ], [ "Jurafsky", "Dan", "" ] ]
Off-the-shelf models are widely used by computational social science researchers to measure properties of text, such as sentiment. However, without access to source data it is difficult to account for domain shift, which represents a threat to validity. Here, we treat domain adaptation as a modular process that involves separate model producers and model consumers, and show how they can independently cooperate to facilitate more accurate measurements of text. We introduce two lightweight techniques for this scenario, and demonstrate that they reliably increase out-of-domain accuracy on four multi-domain text classification datasets when used with linear and contextual embedding models. We conclude with recommendations for model producers and consumers, and release models and replication code to accompany this paper.
2402.05467
Guangyu Shen
Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang
Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia
null
null
null
null
cs.AI cs.CL cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have become prevalent across diverse sectors, transforming human life with their extraordinary reasoning and comprehension abilities. As they find increased use in sensitive tasks, safety concerns have gained widespread attention. Extensive efforts have been dedicated to aligning LLMs with human moral principles to ensure their safe deployment. Despite their potential, recent research indicates aligned LLMs are prone to specialized jailbreaking prompts that bypass safety measures to elicit violent and harmful content. The intrinsic discrete nature and substantial scale of contemporary LLMs pose significant challenges in automatically generating diverse, efficient, and potent jailbreaking prompts, representing a continuous obstacle. In this paper, we introduce RIPPLE (Rapid Optimization via Subconscious Exploitation and Echopraxia), a novel optimization-based method inspired by two psychological concepts: subconsciousness and echopraxia, which describe the processes of the mind that occur without conscious awareness and the involuntary mimicry of actions, respectively. Evaluations across 6 open-source LLMs and 4 commercial LLM APIs show RIPPLE achieves an average Attack Success Rate of 91.5\%, outperforming five current methods by up to 47.0\% with an 8x reduction in overhead. Furthermore, it displays significant transferability and stealth, successfully evading established detection mechanisms. The code of our work is available at \url{https://github.com/SolidShen/RIPPLE_official/tree/official}
[ { "created": "Thu, 8 Feb 2024 07:56:49 GMT", "version": "v1" } ]
2024-02-09
[ [ "Shen", "Guangyu", "" ], [ "Cheng", "Siyuan", "" ], [ "Zhang", "Kaiyuan", "" ], [ "Tao", "Guanhong", "" ], [ "An", "Shengwei", "" ], [ "Yan", "Lu", "" ], [ "Zhang", "Zhuo", "" ], [ "Ma", "Shiqing", "" ], [ "Zhang", "Xiangyu", "" ] ]
Large Language Models (LLMs) have become prevalent across diverse sectors, transforming human life with their extraordinary reasoning and comprehension abilities. As they find increased use in sensitive tasks, safety concerns have gained widespread attention. Extensive efforts have been dedicated to aligning LLMs with human moral principles to ensure their safe deployment. Despite their potential, recent research indicates aligned LLMs are prone to specialized jailbreaking prompts that bypass safety measures to elicit violent and harmful content. The intrinsic discrete nature and substantial scale of contemporary LLMs pose significant challenges in automatically generating diverse, efficient, and potent jailbreaking prompts, representing a continuous obstacle. In this paper, we introduce RIPPLE (Rapid Optimization via Subconscious Exploitation and Echopraxia), a novel optimization-based method inspired by two psychological concepts: subconsciousness and echopraxia, which describe the processes of the mind that occur without conscious awareness and the involuntary mimicry of actions, respectively. Evaluations across 6 open-source LLMs and 4 commercial LLM APIs show RIPPLE achieves an average Attack Success Rate of 91.5\%, outperforming five current methods by up to 47.0\% with an 8x reduction in overhead. Furthermore, it displays significant transferability and stealth, successfully evading established detection mechanisms. The code of our work is available at \url{https://github.com/SolidShen/RIPPLE_official/tree/official}
2407.06293
Xin Liu
Xin Liu, Xingchen Liu, Paul Witherell
A Framework for Simulating the Path-level Residual Stress in the Laser Powder Bed Fusion Process
null
null
null
null
cs.CE physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Laser Powder Bed Fusion (LPBF) additive manufacturing has revolutionized industries with its capability to create intricate and customized components. The LPBF process uses moving heat sources to melt and solidify metal powders. The fast melting and cooling lead to residual stress, which critically affects part quality. Currently, the computational intensity of accurately simulating the residual stress on the path scale remains a significant challenge, limiting our understanding of LPBF processes. This paper presents a framework for simulating the LPBF process residual stress based on the path-level thermal history. Compared with existing approaches, the path-level simulation requires discretization only to capture the scanning path rather than the details of the melt pools, thus requiring a less dense mesh and being more computationally efficient. We develop this framework by introducing a new concept, termed effective thermal strain, to capture the anisotropic thermal strain near and around the melt pool. We validate our approach against high-fidelity results from the literature. We use the proposed approach to simulate various single-island scanning patterns and layers with multiple full and trimmed islands. We further investigate the influence of the path-level thermal history and the layer shape on the residual stress by analyzing their simulation results.
[ { "created": "Wed, 10 Apr 2024 17:28:43 GMT", "version": "v1" } ]
2024-07-10
[ [ "Liu", "Xin", "" ], [ "Liu", "Xingchen", "" ], [ "Witherell", "Paul", "" ] ]
Laser Powder Bed Fusion (LPBF) additive manufacturing has revolutionized industries with its capability to create intricate and customized components. The LPBF process uses moving heat sources to melt and solidify metal powders. The fast melting and cooling lead to residual stress, which critically affects part quality. Currently, the computational intensity of accurately simulating the residual stress on the path scale remains a significant challenge, limiting our understanding of LPBF processes. This paper presents a framework for simulating the LPBF process residual stress based on the path-level thermal history. Compared with existing approaches, the path-level simulation requires discretization only to capture the scanning path rather than the details of the melt pools, thus requiring a less dense mesh and being more computationally efficient. We develop this framework by introducing a new concept, termed effective thermal strain, to capture the anisotropic thermal strain near and around the melt pool. We validate our approach against high-fidelity results from the literature. We use the proposed approach to simulate various single-island scanning patterns and layers with multiple full and trimmed islands. We further investigate the influence of the path-level thermal history and the layer shape on the residual stress by analyzing their simulation results.
2405.18483
Mengyi Shan
Mengyi Shan, Lu Dong, Yutao Han, Yuan Yao, Tao Liu, Ifeoma Nwogu, Guo-Jun Qi, and Mitch Hill
Towards Open Domain Text-Driven Synthesis of Multi-Person Motions
ECCV 2024. Project page: https://shanmy.github.io/Multi-Motion/
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This work aims to generate natural and diverse group motions of multiple humans from textual descriptions. While single-person text-to-motion generation is extensively studied, it remains challenging to synthesize motions for more than one or two subjects from in-the-wild prompts, mainly due to the lack of available datasets. In this work, we curate human pose and motion datasets by estimating pose information from large-scale image and video datasets. Our models use a transformer-based diffusion framework that accommodates multiple datasets with any number of subjects or frames. Experiments explore both generation of multi-person static poses and generation of multi-person motion sequences. To our knowledge, our method is the first to generate multi-subject motion sequences with high diversity and fidelity from a large variety of textual prompts.
[ { "created": "Tue, 28 May 2024 18:00:06 GMT", "version": "v1" }, { "created": "Mon, 15 Jul 2024 07:55:43 GMT", "version": "v2" } ]
2024-07-16
[ [ "Shan", "Mengyi", "" ], [ "Dong", "Lu", "" ], [ "Han", "Yutao", "" ], [ "Yao", "Yuan", "" ], [ "Liu", "Tao", "" ], [ "Nwogu", "Ifeoma", "" ], [ "Qi", "Guo-Jun", "" ], [ "Hill", "Mitch", "" ] ]
This work aims to generate natural and diverse group motions of multiple humans from textual descriptions. While single-person text-to-motion generation is extensively studied, it remains challenging to synthesize motions for more than one or two subjects from in-the-wild prompts, mainly due to the lack of available datasets. In this work, we curate human pose and motion datasets by estimating pose information from large-scale image and video datasets. Our models use a transformer-based diffusion framework that accommodates multiple datasets with any number of subjects or frames. Experiments explore both generation of multi-person static poses and generation of multi-person motion sequences. To our knowledge, our method is the first to generate multi-subject motion sequences with high diversity and fidelity from a large variety of textual prompts.
2401.09656
Tan Chen
Tan Chen, Jintao Yan, Yuxuan Sun, Sheng Zhou, Deniz G\"und\"uz, Zhisheng Niu
Mobility Accelerates Learning: Convergence Analysis on Hierarchical Federated Learning in Vehicular Networks
Submitted to IEEE for possible publication
null
null
null
cs.LG cs.AI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hierarchical federated learning (HFL) enables distributed training of models across multiple devices with the help of several edge servers and a cloud edge server in a privacy-preserving manner. In this paper, we consider HFL with highly mobile devices, mainly targeting vehicular networks. Through convergence analysis, we show that mobility influences the convergence speed by both fusing the edge data and shuffling the edge models. While mobility is usually considered a challenge from the perspective of communication, we prove that it increases the convergence speed of HFL with edge-level heterogeneous data, since more diverse data can be incorporated. Furthermore, we demonstrate that a higher speed leads to faster convergence, since it accelerates the fusion of data. Simulation results show that mobility increases the model accuracy of HFL by up to 15.1% when training a convolutional neural network on the CIFAR-10 dataset.
[ { "created": "Thu, 18 Jan 2024 00:09:54 GMT", "version": "v1" } ]
2024-01-19
[ [ "Chen", "Tan", "" ], [ "Yan", "Jintao", "" ], [ "Sun", "Yuxuan", "" ], [ "Zhou", "Sheng", "" ], [ "Gündüz", "Deniz", "" ], [ "Niu", "Zhisheng", "" ] ]
Hierarchical federated learning (HFL) enables distributed training of models across multiple devices with the help of several edge servers and a cloud edge server in a privacy-preserving manner. In this paper, we consider HFL with highly mobile devices, mainly targeting vehicular networks. Through convergence analysis, we show that mobility influences the convergence speed by both fusing the edge data and shuffling the edge models. While mobility is usually considered a challenge from the perspective of communication, we prove that it increases the convergence speed of HFL with edge-level heterogeneous data, since more diverse data can be incorporated. Furthermore, we demonstrate that a higher speed leads to faster convergence, since it accelerates the fusion of data. Simulation results show that mobility increases the model accuracy of HFL by up to 15.1% when training a convolutional neural network on the CIFAR-10 dataset.
1006.1382
Majid Fozunbal
Majid Fozunbal
On Regret of Parametric Mismatch in Minimum Mean Square Error Estimation
5 Pages, 2 figures, International Symposium on Information Theory (ISIT), June 2010
null
null
HPL-2010-10
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the effect of parametric mismatch in minimum mean square error (MMSE) estimation. In particular, we consider the problem of estimating the input signal from the output of an additive white Gaussian channel whose gain is fixed, but unknown. The input distribution is known, and the estimation process consists of two algorithms. First, a channel estimator blindly estimates the channel gain using past observations. Second, a mismatched MMSE estimator, optimized for the estimated channel gain, estimates the input signal. We analyze the regret, i.e., the additional mean square error that arises in this process. We derive upper bounds on both absolute and relative regret. The bounds are expressed in terms of the Fisher information. We also study regret for unbiased, efficient channel estimators, and derive a simple trade-off between Fisher information and relative regret. This trade-off shows that the product of a certain function of the relative regret and the Fisher information equals the signal-to-noise ratio, independent of the input distribution. The trade-off relation implies that higher Fisher information results in a smaller expected relative regret.
[ { "created": "Mon, 7 Jun 2010 21:47:09 GMT", "version": "v1" } ]
2010-06-09
[ [ "Fozunbal", "Majid", "" ] ]
This paper studies the effect of parametric mismatch in minimum mean square error (MMSE) estimation. In particular, we consider the problem of estimating the input signal from the output of an additive white Gaussian channel whose gain is fixed, but unknown. The input distribution is known, and the estimation process consists of two algorithms. First, a channel estimator blindly estimates the channel gain using past observations. Second, a mismatched MMSE estimator, optimized for the estimated channel gain, estimates the input signal. We analyze the regret, i.e., the additional mean square error that arises in this process. We derive upper bounds on both absolute and relative regret. The bounds are expressed in terms of the Fisher information. We also study regret for unbiased, efficient channel estimators, and derive a simple trade-off between Fisher information and relative regret. This trade-off shows that the product of a certain function of the relative regret and the Fisher information equals the signal-to-noise ratio, independent of the input distribution. The trade-off relation implies that higher Fisher information results in a smaller expected relative regret.
2103.10668
Ramin Shahbazi
Ramin Shahbazi, Rishab Sharma, Fatemeh H. Fard
API2Com: On the Improvement of Automatically Generated Code Comments Using API Documentations
null
null
null
null
cs.SE cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Code comments can help in program comprehension and are considered important artifacts for helping developers in software maintenance. However, comments are often missing or outdated, especially in complex software projects. As a result, several automatic comment generation models have been developed as a solution. Recent models explore the integration of external knowledge resources, such as Unified Modeling Language class diagrams, to improve the generated comments. In this paper, we propose API2Com, a model that leverages Application Programming Interface Documentations (API Docs) as a knowledge resource for comment generation. The API Docs describe the methods in more detail and can therefore provide better context in the generated comments. The API Docs are used along with the code snippets and Abstract Syntax Trees in our model. We apply the model on a large Java dataset of over 130,000 methods and evaluate it using both Transformer and RNN-based architectures. Interestingly, when API Docs are used, the performance increase is negligible. We therefore run different experiments to reason about the results. For methods that contain only one API, adding API Docs improves the results by 4% BLEU score on average (BLEU score is an automatic evaluation metric used in machine translation). However, as the number of APIs used in a method increases, the performance of the model in generating comments decreases due to the long documentations used in the input. Our results confirm that API Docs can be useful in generating better comments, but new techniques are required to identify the most informative ones in a method rather than using all documentations simultaneously.
[ { "created": "Fri, 19 Mar 2021 07:29:40 GMT", "version": "v1" } ]
2021-03-22
[ [ "Shahbazi", "Ramin", "" ], [ "Sharma", "Rishab", "" ], [ "Fard", "Fatemeh H.", "" ] ]
Code comments can help in program comprehension and are considered important artifacts for helping developers in software maintenance. However, comments are often missing or outdated, especially in complex software projects. As a result, several automatic comment generation models have been developed as a solution. Recent models explore the integration of external knowledge resources, such as Unified Modeling Language class diagrams, to improve the generated comments. In this paper, we propose API2Com, a model that leverages Application Programming Interface Documentations (API Docs) as a knowledge resource for comment generation. The API Docs describe the methods in more detail and can therefore provide better context in the generated comments. The API Docs are used along with the code snippets and Abstract Syntax Trees in our model. We apply the model on a large Java dataset of over 130,000 methods and evaluate it using both Transformer and RNN-based architectures. Interestingly, when API Docs are used, the performance increase is negligible. We therefore run different experiments to reason about the results. For methods that contain only one API, adding API Docs improves the results by 4% BLEU score on average (BLEU score is an automatic evaluation metric used in machine translation). However, as the number of APIs used in a method increases, the performance of the model in generating comments decreases due to the long documentations used in the input. Our results confirm that API Docs can be useful in generating better comments, but new techniques are required to identify the most informative ones in a method rather than using all documentations simultaneously.
2103.01843
Nikolaus Demmel
Nikolaus Demmel, Christiane Sommer, Daniel Cremers, Vladyslav Usenko
Square Root Bundle Adjustment for Large-Scale Reconstruction
Accepted to CVPR 2021. Updated version corresponding to CVPR camera-ready. Formatting changes and minor tweaks to fit page requirements
null
10.1109/CVPR46437.2021.01155
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new formulation for the bundle adjustment problem which relies on nullspace marginalization of landmark variables by QR decomposition. Our approach, which we call square root bundle adjustment, is algebraically equivalent to the commonly used Schur complement trick, improves the numeric stability of computations, and allows for solving large-scale bundle adjustment problems with single-precision floating-point numbers. We show in real-world experiments with the BAL datasets that even in single precision the proposed solver achieves on average equally accurate solutions compared to Schur complement solvers using double precision. It runs significantly faster, but can require larger amounts of memory on dense problems. The proposed formulation relies on simple linear algebra operations and opens the way for efficient implementations of bundle adjustment on hardware platforms optimized for single-precision linear algebra processing.
[ { "created": "Tue, 2 Mar 2021 16:26:20 GMT", "version": "v1" }, { "created": "Tue, 30 Mar 2021 23:50:04 GMT", "version": "v2" } ]
2021-11-23
[ [ "Demmel", "Nikolaus", "" ], [ "Sommer", "Christiane", "" ], [ "Cremers", "Daniel", "" ], [ "Usenko", "Vladyslav", "" ] ]
We propose a new formulation for the bundle adjustment problem which relies on nullspace marginalization of landmark variables by QR decomposition. Our approach, which we call square root bundle adjustment, is algebraically equivalent to the commonly used Schur complement trick, improves the numeric stability of computations, and allows for solving large-scale bundle adjustment problems with single-precision floating-point numbers. We show in real-world experiments with the BAL datasets that even in single precision the proposed solver achieves on average equally accurate solutions compared to Schur complement solvers using double precision. It runs significantly faster, but can require larger amounts of memory on dense problems. The proposed formulation relies on simple linear algebra operations and opens the way for efficient implementations of bundle adjustment on hardware platforms optimized for single-precision linear algebra processing.
1212.6883
Vivek Nittoor
Vivek S. Nittoor, Reiji Suda
Partition Parameters for Girth Maximum (m, r) BTUs
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the calculation of the optimal partition parameters such that the girth maximum (m, r) Balanced Tanner Unit (BTU) lies in the family of BTUs they specify, using a series of proved results, and thus creates a framework for specifying a search problem for finding the girth maximum (m, r) BTU. Several open questions about girth maximum (m, r) BTUs are raised.
[ { "created": "Mon, 31 Dec 2012 12:53:32 GMT", "version": "v1" }, { "created": "Tue, 22 Jan 2013 15:06:22 GMT", "version": "v2" } ]
2013-01-23
[ [ "Nittoor", "Vivek S.", "" ], [ "Suda", "Reiji", "" ] ]
This paper describes the calculation of the optimal partition parameters such that the girth maximum (m, r) Balanced Tanner Unit (BTU) lies in the family of BTUs they specify, using a series of proved results, and thus creates a framework for specifying a search problem for finding the girth maximum (m, r) BTU. Several open questions about girth maximum (m, r) BTUs are raised.
1907.12182
Zhenlong Li Dr.
Zhenlong Li
Geospatial Big Data Handling with High Performance Computing: Current Approaches and Future Directions
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geospatial big data plays a major role in the era of big data, as most data today are inherently spatial, collected with ubiquitous location-aware sensors. Efficiently collecting, managing, storing, and analyzing geospatial data streams enables development of new decision-support systems and provides unprecedented opportunities for business, science, and engineering. However, handling the "Vs" (volume, variety, velocity, veracity, and value) of big data is a challenging task. This is especially true for geospatial big data, since the massive datasets must be analyzed in the context of space and time. High performance computing (HPC) provides an essential solution to geospatial big data challenges. This chapter first summarizes four key aspects for handling geospatial big data with HPC and then briefly reviews existing HPC-related platforms and tools for geospatial big data processing. Lastly, future research directions in using HPC for geospatial big data handling are discussed.
[ { "created": "Mon, 29 Jul 2019 02:37:43 GMT", "version": "v1" } ]
2019-07-30
[ [ "Li", "Zhenlong", "" ] ]
Geospatial big data plays a major role in the era of big data, as most data today are inherently spatial, collected with ubiquitous location-aware sensors. Efficiently collecting, managing, storing, and analyzing geospatial data streams enables development of new decision-support systems and provides unprecedented opportunities for business, science, and engineering. However, handling the "Vs" (volume, variety, velocity, veracity, and value) of big data is a challenging task. This is especially true for geospatial big data, since the massive datasets must be analyzed in the context of space and time. High performance computing (HPC) provides an essential solution to geospatial big data challenges. This chapter first summarizes four key aspects for handling geospatial big data with HPC and then briefly reviews existing HPC-related platforms and tools for geospatial big data processing. Lastly, future research directions in using HPC for geospatial big data handling are discussed.
1902.09782
Qingyan Duan
Qingyan Duan and Lei Zhang
BoostGAN for Occlusive Profile Face Frontalization and Recognition
9 pages, 7 figures, 7 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are many factors affecting human face recognition, such as pose, occlusion, illumination, and age. First and foremost are the large-pose and occlusion problems, which can even result in more than 10% performance degradation. Pose-invariant feature representation and face frontalization with generative adversarial networks (GANs) have been widely used to solve the pose problem. However, the synthesis and recognition of occlusive but profile faces is still an uninvestigated problem. To address this issue, in this paper, we aim to contribute an effective solution for recognizing occlusive but profile faces, even with facial keypoint regions (e.g., eyes, nose) corrupted. Specifically, we propose a boosting Generative Adversarial Network (BoostGAN) for de-occlusion, frontalization, and recognition of faces. Upon the assumption that facial occlusion is partial and incomplete, multiple patch-occluded images are fed as inputs for knowledge boosting, such as identity and texture information. A new aggregation structure composed of a deep GAN for coarse face synthesis and a shallow boosting net for fine face generation is further designed. Exhaustive experiments demonstrate that the proposed approach not only presents clear, perceptually photo-realistic results but also shows state-of-the-art recognition performance for occlusive but profile faces.
[ { "created": "Tue, 26 Feb 2019 07:59:47 GMT", "version": "v1" } ]
2019-02-27
[ [ "Duan", "Qingyan", "" ], [ "Zhang", "Lei", "" ] ]
There are many factors affecting human face recognition, such as pose, occlusion, illumination, and age. First and foremost are the large-pose and occlusion problems, which can even result in more than 10% performance degradation. Pose-invariant feature representation and face frontalization with generative adversarial networks (GANs) have been widely used to solve the pose problem. However, the synthesis and recognition of occlusive but profile faces is still an uninvestigated problem. To address this issue, in this paper, we aim to contribute an effective solution for recognizing occlusive but profile faces, even with facial keypoint regions (e.g., eyes, nose) corrupted. Specifically, we propose a boosting Generative Adversarial Network (BoostGAN) for de-occlusion, frontalization, and recognition of faces. Upon the assumption that facial occlusion is partial and incomplete, multiple patch-occluded images are fed as inputs for knowledge boosting, such as identity and texture information. A new aggregation structure composed of a deep GAN for coarse face synthesis and a shallow boosting net for fine face generation is further designed. Exhaustive experiments demonstrate that the proposed approach not only presents clear, perceptually photo-realistic results but also shows state-of-the-art recognition performance for occlusive but profile faces.
2212.05560
Priya Shukla
Ankit Kumar, Priya Shukla, Vandana Kushwaha and G.C. Nandi
Context-aware 6D Pose Estimation of Known Objects using RGB-D data
null
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
6D object pose estimation has been a research topic in the fields of computer vision and robotics. Many modern applications, such as robot grasping, manipulation, and autonomous navigation, require the correct pose of objects present in a scene to perform their specific task. The problem becomes even harder when the objects are placed in a cluttered scene and the level of occlusion is high. Prior works have tried to overcome this problem but could not achieve accuracy that can be considered reliable in real-world applications. In this paper, we present an architecture that, unlike prior work, is context-aware: it utilizes the context information available about the objects. Our proposed architecture treats the objects separately according to their types, i.e., symmetric and non-symmetric. A deeper estimator and refiner network pair is used for non-symmetric objects than for symmetric ones, due to their intrinsic differences. Our experiments show an enhancement in accuracy of about 3.2% over the LineMOD dataset, which is considered a benchmark for pose estimation in occluded and cluttered scenes, against the prior state-of-the-art DenseFusion. Our results also show that the achieved inference time is sufficient for real-time usage.
[ { "created": "Sun, 11 Dec 2022 18:01:01 GMT", "version": "v1" } ]
2022-12-13
[ [ "Kumar", "Ankit", "" ], [ "Shukla", "Priya", "" ], [ "Kushwaha", "Vandana", "" ], [ "Nandi", "G. C.", "" ] ]
6D object pose estimation has been a research topic in the fields of computer vision and robotics. Many modern applications, such as robot grasping, manipulation, and autonomous navigation, require the correct pose of objects present in a scene to perform their specific task. The problem becomes even harder when the objects are placed in a cluttered scene and the level of occlusion is high. Prior works have tried to overcome this problem but could not achieve accuracy that can be considered reliable in real-world applications. In this paper, we present an architecture that, unlike prior work, is context-aware: it utilizes the context information available about the objects. Our proposed architecture treats the objects separately according to their types, i.e., symmetric and non-symmetric. A deeper estimator and refiner network pair is used for non-symmetric objects than for symmetric ones, due to their intrinsic differences. Our experiments show an enhancement in accuracy of about 3.2% over the LineMOD dataset, which is considered a benchmark for pose estimation in occluded and cluttered scenes, against the prior state-of-the-art DenseFusion. Our results also show that the achieved inference time is sufficient for real-time usage.
1606.05943
EPTCS
Roly Perera (University of Glasgow), Julien Lange (Imperial College London), Simon J. Gay (University of Glasgow)
Multiparty Compatibility for Concurrent Objects
In Proceedings PLACES 2016, arXiv:1606.05403
EPTCS 211, 2016, pp. 73-82
10.4204/EPTCS.211.8
null
cs.PL cs.DC cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objects and actors are communicating state machines, offering and consuming different services at different points in their lifecycle. Two complementary challenges arise when programming such systems. When objects interact, their state machines must be "compatible", so that services are requested only when they are available. Dually, when objects refine other objects, their state machines must be "compliant", so that services are honoured whenever they are promised. In this paper we show how the idea of multiparty compatibility from the session types literature can be applied to both of these problems. We present an untyped language in which concurrent objects are checked automatically for compatibility and compliance. For simple objects, checking can be exhaustive and has the feel of a type system. More complex objects can be partially validated via test cases, leading to a methodology closer to continuous testing. Our proof-of-concept implementation is limited in some important respects, but demonstrates the potential value of the approach and the relationship to existing software development practices.
[ { "created": "Mon, 20 Jun 2016 01:09:44 GMT", "version": "v1" } ]
2016-06-21
[ [ "Perera", "Roly", "", "University of Glasgow" ], [ "Lange", "Julien", "", "Imperial College\n London" ], [ "Gay", "Simon J.", "", "University of Glasgow" ] ]
Objects and actors are communicating state machines, offering and consuming different services at different points in their lifecycle. Two complementary challenges arise when programming such systems. When objects interact, their state machines must be "compatible", so that services are requested only when they are available. Dually, when objects refine other objects, their state machines must be "compliant", so that services are honoured whenever they are promised. In this paper we show how the idea of multiparty compatibility from the session types literature can be applied to both of these problems. We present an untyped language in which concurrent objects are checked automatically for compatibility and compliance. For simple objects, checking can be exhaustive and has the feel of a type system. More complex objects can be partially validated via test cases, leading to a methodology closer to continuous testing. Our proof-of-concept implementation is limited in some important respects, but demonstrates the potential value of the approach and the relationship to existing software development practices.
2211.14672
Mohammad Javad Sojdeh
Mohammad Javad Sojdeh, Mehdi Letafati, Seyed Pooya Shariatpanahi, Babak Hossein Khalaj
Multi-Transmitter Coded Caching with Secure Delivery over Linear Networks -- Extended Version
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we consider multiple cache-enabled end-users connected to multiple transmitters through a linear network. We also prevent a totally passive eavesdropper, who sniffs the packets in the delivery phase, from obtaining any information about the original files in cache-aided networks. Three different secure centralized multi-transmitter coded caching scenarios, namely secure multi-transmitter coded caching, secure multi-transmitter coded caching with reduced subpacketization, and secure multi-transmitter coded caching with reduced feedback, are considered, and closed-form coding delay and secret shared key storage expressions are provided. As our security guarantee, we show that the delivery phase does not reveal any information to the eavesdropper, using the mutual information metric. Moreover, we investigate the secure decentralized multi-transmitter coded caching scenario, in which there is no cooperation between the clients and transmitters during the cache content placement phase, and study its performance compared to the centralized scheme. We analyze the system's performance in terms of coding delay and guarantee the security of the presented schemes using the mutual information metric. Numerical evaluations verify that security incurs a negligible cost in terms of memory usage when the number of files and users is scaled up, in both centralized and decentralized scenarios. We also show numerically that, as the number of files and users increases, the secure coding delays of the centralized and decentralized schemes become asymptotically equal.
[ { "created": "Sat, 26 Nov 2022 21:57:45 GMT", "version": "v1" } ]
2022-11-29
[ [ "Sojdeh", "Mohammad Javad", "" ], [ "Letafati", "Mehdi", "" ], [ "Shariatpanahi", "Seyed Pooya", "" ], [ "Khalaj", "Babak Hossein", "" ] ]
In this paper, we consider multiple cache-enabled end-users connected to multiple transmitters through a linear network. We also prevent a totally passive eavesdropper, who sniffs the packets in the delivery phase, from obtaining any information about the original files in cache-aided networks. Three different secure centralized multi-transmitter coded caching scenarios, namely secure multi-transmitter coded caching, secure multi-transmitter coded caching with reduced subpacketization, and secure multi-transmitter coded caching with reduced feedback, are considered, and closed-form coding delay and secret shared key storage expressions are provided. As our security guarantee, we show that the delivery phase does not reveal any information to the eavesdropper, using the mutual information metric. Moreover, we investigate the secure decentralized multi-transmitter coded caching scenario, in which there is no cooperation between the clients and transmitters during the cache content placement phase, and study its performance compared to the centralized scheme. We analyze the system's performance in terms of coding delay and guarantee the security of the presented schemes using the mutual information metric. Numerical evaluations verify that security incurs a negligible cost in terms of memory usage when the number of files and users is scaled up, in both centralized and decentralized scenarios. We also show numerically that, as the number of files and users increases, the secure coding delays of the centralized and decentralized schemes become asymptotically equal.
2311.06204
Md. Motahar Mahtab
Md. Motahar Mahtab, Monirul Haque, Mehedi Hasan and Farig Sadeque
BanglaBait: Semi-Supervised Adversarial Approach for Clickbait Detection on Bangla Clickbait Dataset
8 pages, 3 figures, 5 tables, published in Recent Advances in Natural Language Processing 2023
null
10.26615/978-954-452-092-2_081
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Intentionally luring readers to click on particular content by exploiting their curiosity defines a title as clickbait. Although several studies have focused on detecting clickbait titles in English articles, low-resource languages like Bangla have not been given adequate attention. To tackle clickbait titles in Bangla, we have constructed the first Bangla clickbait detection dataset, containing 15,056 labeled news articles and 65,406 unlabeled news articles extracted from clickbait-dense news sites. Each article has been labeled by three expert linguists and includes the article's title, body, and other metadata. By incorporating labeled and unlabeled data, we fine-tune a pretrained Bangla transformer model in an adversarial fashion using Semi-Supervised Generative Adversarial Networks (SS-GANs). The proposed model acts as a good baseline for this dataset, outperforming traditional neural network models (LSTM, GRU, CNN) and linguistic-feature-based models. We expect that this dataset and the detailed analysis and comparison of these clickbait detection models will provide a fundamental basis for future research into detecting clickbait titles in Bengali articles. We have released the corresponding code and dataset.
[ { "created": "Fri, 10 Nov 2023 17:38:46 GMT", "version": "v1" } ]
2023-11-13
[ [ "Mahtab", "Md. Motahar", "" ], [ "Haque", "Monirul", "" ], [ "Hasan", "Mehedi", "" ], [ "Sadeque", "Farig", "" ] ]
Intentionally luring readers to click on particular content by exploiting their curiosity defines a title as clickbait. Although several studies have focused on detecting clickbait titles in English articles, low-resource languages like Bangla have not been given adequate attention. To tackle clickbait titles in Bangla, we have constructed the first Bangla clickbait detection dataset, containing 15,056 labeled news articles and 65,406 unlabeled news articles extracted from clickbait-dense news sites. Each article has been labeled by three expert linguists and includes the article's title, body, and other metadata. By incorporating labeled and unlabeled data, we fine-tune a pretrained Bangla transformer model in an adversarial fashion using Semi-Supervised Generative Adversarial Networks (SS-GANs). The proposed model acts as a good baseline for this dataset, outperforming traditional neural network models (LSTM, GRU, CNN) and linguistic-feature-based models. We expect that this dataset and the detailed analysis and comparison of these clickbait detection models will provide a fundamental basis for future research into detecting clickbait titles in Bengali articles. We have released the corresponding code and dataset.
1702.04956
Aaron Gerow
Aaron Gerow, Mingyang Zhou, Stan Matwin, Feng Shi
Reflexive Regular Equivalence for Bipartite Data
A condensed version of this paper will appear in Proceedings of the 30th Canadian Conference on Artificial Intelligence, Edmonton, Alberta, Canada
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bipartite data is common in data engineering and brings unique challenges, particularly for clustering tasks that impose strong structural assumptions. This work presents an unsupervised method for assessing similarity in bipartite data. Like some co-clustering methods, the method is based on regular equivalence in graphs. The algorithm uses spectral properties of a bipartite adjacency matrix to estimate similarity in both dimensions. The method is reflexive in that similarity in one dimension is used to inform similarity in the other. Reflexive regular equivalence can also use the structure of transitivities -- in a network sense -- the contribution of which is controlled by the algorithm's only free parameter, $\alpha$. The method is completely unsupervised and can be used to validate assumptions of co-similarity, which are required but often untested in co-clustering analyses. Three variants of the method with different normalizations are tested on synthetic data. The method is found to be robust to noise and well suited to asymmetric co-similar structure, making it particularly informative for cluster analysis and recommendation in bipartite data of unknown structure. In experiments, the convergence and speed of the algorithm are found to be stable for different levels of noise. Real-world data from a network of malaria genes are analyzed, where the similarity produced by the reflexive method is shown to outperform other measures' ability to correctly classify genes.
[ { "created": "Thu, 16 Feb 2017 13:29:30 GMT", "version": "v1" } ]
2017-02-17
[ [ "Gerow", "Aaron", "" ], [ "Zhou", "Mingyang", "" ], [ "Matwin", "Stan", "" ], [ "Shi", "Feng", "" ] ]
Bipartite data is common in data engineering and brings unique challenges, particularly for clustering tasks that impose strong structural assumptions. This work presents an unsupervised method for assessing similarity in bipartite data. Like some co-clustering methods, the method is based on regular equivalence in graphs. The algorithm uses spectral properties of a bipartite adjacency matrix to estimate similarity in both dimensions. The method is reflexive in that similarity in one dimension is used to inform similarity in the other. Reflexive regular equivalence can also use the structure of transitivities -- in a network sense -- the contribution of which is controlled by the algorithm's only free parameter, $\alpha$. The method is completely unsupervised and can be used to validate assumptions of co-similarity, which are required but often untested in co-clustering analyses. Three variants of the method with different normalizations are tested on synthetic data. The method is found to be robust to noise and well suited to asymmetric co-similar structure, making it particularly informative for cluster analysis and recommendation in bipartite data of unknown structure. In experiments, the convergence and speed of the algorithm are found to be stable for different levels of noise. Real-world data from a network of malaria genes are analyzed, where the similarity produced by the reflexive method is shown to outperform other measures' ability to correctly classify genes.
cs/0312001
Martin Lisewski
A. M. Lisewski
The concept of strong and weak virtual reality
17 pages; several edits in v2
Minds and Machines, 16 (2), 201-219 (2006)
10.1007/s11023-006-9037-z
null
cs.LO nlin.AO physics.comp-ph
null
We approach the virtual reality phenomenon by studying its relationship to set theory, and we investigate the case where this is done using the wellfoundedness property of sets. Our hypothesis is that non-wellfounded sets (hypersets) give rise to a different quality of virtual reality than do familiar wellfounded sets. We initially provide an alternative approach to virtual reality based on Sommerhoff's idea of first and second order self-awareness; both categories of self-awareness are considered as necessary conditions for consciousness in terms of higher cognitive functions. We then introduce a representation of first and second order self-awareness through sets, and assume that these sets, which we call events, originally form a collection of wellfounded sets. Strong virtual reality characterizes virtual reality environments which have the limited capacity to create only events associated with wellfounded sets. In contrast, the more general concept of weak virtual reality characterizes collections of virtual reality mediated events altogether forming an entirety larger than any collection of wellfounded sets. By giving reference to Aczel's hyperset theory we indicate that this definition is not empty, because hypersets encompass wellfounded sets already. Moreover, we argue that weak virtual reality could be realized in human history through continued progress in computer technology. Finally, we reformulate our characterization into a more general framework, and use Baltag's Structural Theory of Sets (STS) to show that within this general hyperset theory Sommerhoff's first and second order self-awareness as well as both concepts of virtual reality admit a consistent mathematical representation.
[ { "created": "Sat, 29 Nov 2003 14:08:56 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2006 20:21:55 GMT", "version": "v2" }, { "created": "Thu, 30 Mar 2006 22:38:13 GMT", "version": "v3" } ]
2007-05-23
[ [ "Lisewski", "A. M.", "" ] ]
We approach the virtual reality phenomenon by studying its relationship to set theory, and we investigate the case where this is done using the wellfoundedness property of sets. Our hypothesis is that non-wellfounded sets (hypersets) give rise to a different quality of virtual reality than do familiar wellfounded sets. We initially provide an alternative approach to virtual reality based on Sommerhoff's idea of first and second order self-awareness; both categories of self-awareness are considered as necessary conditions for consciousness in terms of higher cognitive functions. We then introduce a representation of first and second order self-awareness through sets, and assume that these sets, which we call events, originally form a collection of wellfounded sets. Strong virtual reality characterizes virtual reality environments which have the limited capacity to create only events associated with wellfounded sets. In contrast, the more general concept of weak virtual reality characterizes collections of virtual reality mediated events altogether forming an entirety larger than any collection of wellfounded sets. By giving reference to Aczel's hyperset theory we indicate that this definition is not empty, because hypersets encompass wellfounded sets already. Moreover, we argue that weak virtual reality could be realized in human history through continued progress in computer technology. Finally, we reformulate our characterization into a more general framework, and use Baltag's Structural Theory of Sets (STS) to show that within this general hyperset theory Sommerhoff's first and second order self-awareness as well as both concepts of virtual reality admit a consistent mathematical representation.
2205.09121
Mahsa Yousefi
Mahsa Yousefi, Angeles Martinez
On the efficiency of Stochastic Quasi-Newton Methods for Deep Learning
null
null
null
null
cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
While first-order methods are popular for solving optimization problems that arise in large-scale deep learning, they come with some acute deficiencies. To diminish such shortcomings, there has been recent interest in applying second-order methods such as quasi-Newton methods, which construct Hessian approximations using only gradient information. The main focus of our work is to study the behaviour of stochastic quasi-Newton algorithms for training deep neural networks. We have analyzed the performance of two well-known quasi-Newton updates, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) and the Symmetric Rank One (SR1). This study fills a gap concerning the real performance of both updates and analyzes whether more efficient training is obtained when using the more robust BFGS update or the cheaper SR1 formula, which allows for indefinite Hessian approximations and thus can potentially help to better navigate the pathological saddle points present in the non-convex loss functions found in deep learning. We present and discuss the results of an extensive experimental study which includes the effect of batch normalization and network architecture, the limited-memory parameter, the batch size, and the type of sampling strategy. We show that stochastic quasi-Newton optimizers are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer run with the optimal combination of its numerous hyperparameters.
[ { "created": "Wed, 18 May 2022 20:53:58 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2023 14:44:35 GMT", "version": "v2" } ]
2023-10-05
[ [ "Yousefi", "Mahsa", "" ], [ "Martinez", "Angeles", "" ] ]
While first-order methods are popular for solving optimization problems that arise in large-scale deep learning, they come with some acute deficiencies. To diminish such shortcomings, there has been recent interest in applying second-order methods such as quasi-Newton methods, which construct Hessian approximations using only gradient information. The main focus of our work is to study the behaviour of stochastic quasi-Newton algorithms for training deep neural networks. We have analyzed the performance of two well-known quasi-Newton updates, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) and the Symmetric Rank One (SR1). This study fills a gap concerning the real performance of both updates and analyzes whether more efficient training is obtained when using the more robust BFGS update or the cheaper SR1 formula, which allows for indefinite Hessian approximations and thus can potentially help to better navigate the pathological saddle points present in the non-convex loss functions found in deep learning. We present and discuss the results of an extensive experimental study which includes the effect of batch normalization and network architecture, the limited-memory parameter, the batch size, and the type of sampling strategy. We show that stochastic quasi-Newton optimizers are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer run with the optimal combination of its numerous hyperparameters.
2307.07134
Zheng Gong
Qi Liu, Zheng Gong, Zhenya Huang, Chuanren Liu, Hengshu Zhu, Zhi Li, Enhong Chen and Hui Xiong
Multi-Dimensional Ability Diagnosis for Machine Learning Algorithms
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning algorithms have become ubiquitous in a number of applications (e.g. image classification). However, due to the insufficient measurement of traditional metrics (e.g. the coarse-grained Accuracy of each classifier), substantial gaps are usually observed between the real-world performance of these algorithms and their scores in standardized evaluations. In this paper, inspired by the psychometric theories from human measurement, we propose a task-agnostic evaluation framework Camilla, where a multi-dimensional diagnostic metric Ability is defined for collaboratively measuring the multifaceted strength of each machine learning algorithm. Specifically, given the response logs from different algorithms to data samples, we leverage cognitive diagnosis assumptions and neural networks to learn the complex interactions among algorithms, samples and the skills (explicitly or implicitly pre-defined) of each sample. In this way, both the abilities of each algorithm on multiple skills and some of the sample factors (e.g. sample difficulty) can be simultaneously quantified. We conduct extensive experiments with hundreds of machine learning algorithms on four public datasets, and our experimental results demonstrate that Camilla not only can capture the pros and cons of each algorithm more precisely, but also outperforms state-of-the-art baselines on the metric reliability, rank consistency and rank stability.
[ { "created": "Fri, 14 Jul 2023 03:15:56 GMT", "version": "v1" } ]
2023-07-17
[ [ "Liu", "Qi", "" ], [ "Gong", "Zheng", "" ], [ "Huang", "Zhenya", "" ], [ "Liu", "Chuanren", "" ], [ "Zhu", "Hengshu", "" ], [ "Li", "Zhi", "" ], [ "Chen", "Enhong", "" ], [ "Xiong", "Hui", "" ] ]
Machine learning algorithms have become ubiquitous in a number of applications (e.g. image classification). However, due to the insufficient measurement of traditional metrics (e.g. the coarse-grained Accuracy of each classifier), substantial gaps are usually observed between the real-world performance of these algorithms and their scores in standardized evaluations. In this paper, inspired by the psychometric theories from human measurement, we propose a task-agnostic evaluation framework Camilla, where a multi-dimensional diagnostic metric Ability is defined for collaboratively measuring the multifaceted strength of each machine learning algorithm. Specifically, given the response logs from different algorithms to data samples, we leverage cognitive diagnosis assumptions and neural networks to learn the complex interactions among algorithms, samples and the skills (explicitly or implicitly pre-defined) of each sample. In this way, both the abilities of each algorithm on multiple skills and some of the sample factors (e.g. sample difficulty) can be simultaneously quantified. We conduct extensive experiments with hundreds of machine learning algorithms on four public datasets, and our experimental results demonstrate that Camilla can not only capture the pros and cons of each algorithm more precisely, but also outperforms state-of-the-art baselines in terms of metric reliability, rank consistency and rank stability.
2306.15898
Marzieh Haghighi
Marzieh Haghighi, Mario C. Cruz, Erin Weisbart, Beth A. Cimini, Avtar Singh, Julia Bauman, Maria E. Lozada, Sanam L. Kavari, James T. Neal, Paul C. Blainey, Anne E. Carpenter and Shantanu Singh
Pseudo-Labeling Enhanced by Privileged Information and Its Application to In Situ Sequencing Images
This paper has been accepted for publication at IJCAI 2023
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI), Main Track, Pages 4775-4784, 2023
10.24963/ijcai.2023/531
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Various strategies for label-scarce object detection have been explored by the computer vision research community. These strategies mainly rely on assumptions that are specific to natural images and not directly applicable to the biological and biomedical vision domains. For example, most semi-supervised learning strategies rely on a small set of labeled data as a confident source of ground truth. In many biological vision applications, however, the ground truth is unknown and indirect information might be available in the form of noisy estimations or orthogonal evidence. In this work, we frame a crucial problem in spatial transcriptomics - decoding barcodes from In-Situ-Sequencing (ISS) images - as a semi-supervised object detection (SSOD) problem. Our proposed framework incorporates additional available sources of information into a semi-supervised learning framework in the form of privileged information. The privileged information is incorporated into the teacher's pseudo-labeling in a teacher-student self-training iteration. Although the available privileged information could be data-domain-specific, we have introduced a general strategy of pseudo-labeling enhanced by privileged information (PLePI) and exemplified the concept using ISS images, as well as on the COCO benchmark using extra evidence provided by CLIP.
[ { "created": "Wed, 28 Jun 2023 03:44:42 GMT", "version": "v1" } ]
2023-09-25
[ [ "Haghighi", "Marzieh", "" ], [ "Cruz", "Mario C.", "" ], [ "Weisbart", "Erin", "" ], [ "Cimini", "Beth A.", "" ], [ "Singh", "Avtar", "" ], [ "Bauman", "Julia", "" ], [ "Lozada", "Maria E.", "" ], [ "Kavari", "Sanam L.", "" ], [ "Neal", "James T.", "" ], [ "Blainey", "Paul C.", "" ], [ "Carpenter", "Anne E.", "" ], [ "Singh", "Shantanu", "" ] ]
Various strategies for label-scarce object detection have been explored by the computer vision research community. These strategies mainly rely on assumptions that are specific to natural images and not directly applicable to the biological and biomedical vision domains. For example, most semi-supervised learning strategies rely on a small set of labeled data as a confident source of ground truth. In many biological vision applications, however, the ground truth is unknown and indirect information might be available in the form of noisy estimations or orthogonal evidence. In this work, we frame a crucial problem in spatial transcriptomics - decoding barcodes from In-Situ-Sequencing (ISS) images - as a semi-supervised object detection (SSOD) problem. Our proposed framework incorporates additional available sources of information into a semi-supervised learning framework in the form of privileged information. The privileged information is incorporated into the teacher's pseudo-labeling in a teacher-student self-training iteration. Although the available privileged information could be data-domain-specific, we have introduced a general strategy of pseudo-labeling enhanced by privileged information (PLePI) and exemplified the concept using ISS images, as well as on the COCO benchmark using extra evidence provided by CLIP.
1604.01595
Kazuyuki Asada
Kazuyuki Asada and Naoki Kobayashi
On Word and Frontier Languages of Unsafe Higher-Order Grammars
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Higher-order grammars are extensions of regular and context-free grammars, where non-terminals may take parameters. They were extensively studied in the 1980s, and have been restudied recently in the context of model checking and program verification. We show that the class of unsafe order-(n+1) word languages coincides with the class of frontier languages of unsafe order-n tree languages. We use intersection types for transforming an order-(n+1) word grammar to a corresponding order-n tree grammar. The result has been proved for safe languages by Damm in 1982, but it has been open for unsafe languages, to our knowledge. Various known results on higher-order grammars can be obtained as almost immediate corollaries of our result.
[ { "created": "Wed, 6 Apr 2016 12:47:52 GMT", "version": "v1" }, { "created": "Mon, 16 May 2016 11:49:15 GMT", "version": "v2" }, { "created": "Fri, 20 May 2016 06:43:01 GMT", "version": "v3" } ]
2016-05-23
[ [ "Asada", "Kazuyuki", "" ], [ "Kobayashi", "Naoki", "" ] ]
Higher-order grammars are extensions of regular and context-free grammars, where non-terminals may take parameters. They were extensively studied in the 1980s, and have been restudied recently in the context of model checking and program verification. We show that the class of unsafe order-(n+1) word languages coincides with the class of frontier languages of unsafe order-n tree languages. We use intersection types for transforming an order-(n+1) word grammar to a corresponding order-n tree grammar. The result has been proved for safe languages by Damm in 1982, but it has been open for unsafe languages, to our knowledge. Various known results on higher-order grammars can be obtained as almost immediate corollaries of our result.
0805.0120
Stephen Vavasis
Michael Biggs, Ali Ghodsi, Stephen Vavasis
Nonnegative Matrix Factorization via Rank-One Downdate
null
null
null
null
cs.IR cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nonnegative matrix factorization (NMF) was popularized as a tool for data mining by Lee and Seung in 1999. NMF attempts to approximate a matrix with nonnegative entries by a product of two low-rank matrices, also with nonnegative entries. We propose an algorithm called rank-one downdate (R1D) for computing an NMF that is partly motivated by the singular value decomposition. This algorithm computes the dominant singular values and vectors of adaptively determined submatrices of a matrix. On each iteration, R1D extracts a rank-one submatrix from the dataset according to an objective function. We establish a theoretical result that maximizing this objective function corresponds to correctly classifying articles in a nearly separable corpus. We also provide computational experiments showing the success of this method in identifying features in realistic datasets.
[ { "created": "Thu, 1 May 2008 17:59:44 GMT", "version": "v1" } ]
2008-05-02
[ [ "Biggs", "Michael", "" ], [ "Ghodsi", "Ali", "" ], [ "Vavasis", "Stephen", "" ] ]
Nonnegative matrix factorization (NMF) was popularized as a tool for data mining by Lee and Seung in 1999. NMF attempts to approximate a matrix with nonnegative entries by a product of two low-rank matrices, also with nonnegative entries. We propose an algorithm called rank-one downdate (R1D) for computing an NMF that is partly motivated by the singular value decomposition. This algorithm computes the dominant singular values and vectors of adaptively determined submatrices of a matrix. On each iteration, R1D extracts a rank-one submatrix from the dataset according to an objective function. We establish a theoretical result that maximizing this objective function corresponds to correctly classifying articles in a nearly separable corpus. We also provide computational experiments showing the success of this method in identifying features in realistic datasets.
2402.12712
Shitao Tang
Shitao Tang, Jiacheng Chen, Dilin Wang, Chengzhou Tang, Fuyang Zhang, Yuchen Fan, Vikas Chandra, Yasutaka Furukawa, Rakesh Ranjan
MVDiffusion++: A Dense High-resolution Multi-view Diffusion Model for Single or Sparse-view 3D Object Reconstruction
3D generation, project page: https://mvdiffusion-plusplus.github.io/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper presents a neural architecture MVDiffusion++ for 3D object reconstruction that synthesizes dense and high-resolution views of an object given one or a few images without camera poses. MVDiffusion++ achieves superior flexibility and scalability with two surprisingly simple ideas: 1) A ``pose-free architecture'' where standard self-attention among 2D latent features learns 3D consistency across an arbitrary number of conditional and generation views without explicitly using camera pose information; and 2) A ``view dropout strategy'' that discards a substantial number of output views during training, which reduces the training-time memory footprint and enables dense and high-resolution view synthesis at test time. We use the Objaverse for training and the Google Scanned Objects for evaluation with standard novel view synthesis and 3D reconstruction metrics, where MVDiffusion++ significantly outperforms the current state of the art. We also demonstrate a text-to-3D application example by combining MVDiffusion++ with a text-to-image generative model. The project page is at https://mvdiffusion-plusplus.github.io.
[ { "created": "Tue, 20 Feb 2024 04:25:57 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2024 17:58:05 GMT", "version": "v2" }, { "created": "Tue, 30 Apr 2024 04:11:58 GMT", "version": "v3" } ]
2024-05-01
[ [ "Tang", "Shitao", "" ], [ "Chen", "Jiacheng", "" ], [ "Wang", "Dilin", "" ], [ "Tang", "Chengzhou", "" ], [ "Zhang", "Fuyang", "" ], [ "Fan", "Yuchen", "" ], [ "Chandra", "Vikas", "" ], [ "Furukawa", "Yasutaka", "" ], [ "Ranjan", "Rakesh", "" ] ]
This paper presents a neural architecture MVDiffusion++ for 3D object reconstruction that synthesizes dense and high-resolution views of an object given one or a few images without camera poses. MVDiffusion++ achieves superior flexibility and scalability with two surprisingly simple ideas: 1) A ``pose-free architecture'' where standard self-attention among 2D latent features learns 3D consistency across an arbitrary number of conditional and generation views without explicitly using camera pose information; and 2) A ``view dropout strategy'' that discards a substantial number of output views during training, which reduces the training-time memory footprint and enables dense and high-resolution view synthesis at test time. We use the Objaverse for training and the Google Scanned Objects for evaluation with standard novel view synthesis and 3D reconstruction metrics, where MVDiffusion++ significantly outperforms the current state of the art. We also demonstrate a text-to-3D application example by combining MVDiffusion++ with a text-to-image generative model. The project page is at https://mvdiffusion-plusplus.github.io.
1204.4765
Julius D'souza
Julius D'souza
String Trees
5 pages
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A string-like compact data structure for unlabelled rooted trees is given using 2n bits.
[ { "created": "Sat, 21 Apr 2012 00:36:28 GMT", "version": "v1" } ]
2015-03-20
[ [ "D'souza", "Julius", "" ] ]
A string-like compact data structure for unlabelled rooted trees is given using 2n bits.
1301.3865
Tony S. Jebara
Tony S. Jebara, Tommi S. Jaakkola
Feature Selection and Dualities in Maximum Entropy Discrimination
Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
null
null
UAI-P-2000-PG-291-300
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incorporating feature selection into a classification or regression method often carries a number of advantages. In this paper we formalize feature selection specifically from a discriminative perspective of improving classification/regression accuracy. The feature selection method is developed as an extension to the recently proposed maximum entropy discrimination (MED) framework. We describe MED as a flexible (Bayesian) regularization approach that subsumes, e.g., support vector classification, regression and exponential family models. For brevity, we restrict ourselves primarily to feature selection in the context of linear classification/regression methods and demonstrate that the proposed approach indeed carries substantial improvements in practice. Moreover, we discuss and develop various extensions of feature selection, including the problem of dealing with example-specific but unobserved degrees of freedom -- alignments or invariants.
[ { "created": "Wed, 16 Jan 2013 15:50:50 GMT", "version": "v1" } ]
2013-01-18
[ [ "Jebara", "Tony S.", "" ], [ "Jaakkola", "Tommi S.", "" ] ]
Incorporating feature selection into a classification or regression method often carries a number of advantages. In this paper we formalize feature selection specifically from a discriminative perspective of improving classification/regression accuracy. The feature selection method is developed as an extension to the recently proposed maximum entropy discrimination (MED) framework. We describe MED as a flexible (Bayesian) regularization approach that subsumes, e.g., support vector classification, regression and exponential family models. For brevity, we restrict ourselves primarily to feature selection in the context of linear classification/regression methods and demonstrate that the proposed approach indeed carries substantial improvements in practice. Moreover, we discuss and develop various extensions of feature selection, including the problem of dealing with example-specific but unobserved degrees of freedom -- alignments or invariants.
2209.08731
Sandy Irani
Dorit Aharonov and Sandy Irani
Translationally Invariant Constraint Optimization Problems
75 pages, 13 figures
null
null
null
cs.CC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the complexity of classical constraint satisfaction problems on a 2D grid. Specifically, we consider the complexity of function versions of such problems, with the additional restriction that the constraints are translationally invariant, namely, the variables are located at the vertices of a 2D grid and the constraint between every pair of adjacent variables is the same in each dimension. The only input to the problem is thus the size of the grid. This problem is equivalent to one of the most interesting problems in classical physics, namely, computing the lowest energy of a classical system of particles on the grid. We provide a tight characterization of the complexity of this problem, and show that it is complete for the class $FP^{NEXP}$. Gottesman and Irani (FOCS 2009) also studied classical translationally invariant constraint satisfaction problems; they showed that the problem of deciding whether the cost of the optimal solution is below a given threshold is NEXP-complete. Our result is thus a strengthening of their result from the decision version to the function version of the problem. Our result can also be viewed as a generalization, to the translationally invariant setting, of Krentel's famous result from 1988, showing that the function version of SAT is complete for the class $FP^{NP}$. An essential ingredient in the proof is a study of the complexity of a gapped variant of the problem. We show that it is NEXP-hard to approximate the cost of the optimal assignment to within an additive error of $\Omega(N^{1/4})$, for an $N \times N$ grid. To the best of our knowledge, no gapped result is known for CSPs on the grid, even in the non-translationally invariant case. As a byproduct of our results, we also show that a decision version of the optimization problem which asks whether the cost of the optimal assignment is odd or even is also complete for $P^{NEXP}$.
[ { "created": "Mon, 19 Sep 2022 03:03:05 GMT", "version": "v1" } ]
2022-09-20
[ [ "Aharonov", "Dorit", "" ], [ "Irani", "Sandy", "" ] ]
We study the complexity of classical constraint satisfaction problems on a 2D grid. Specifically, we consider the complexity of function versions of such problems, with the additional restriction that the constraints are translationally invariant, namely, the variables are located at the vertices of a 2D grid and the constraint between every pair of adjacent variables is the same in each dimension. The only input to the problem is thus the size of the grid. This problem is equivalent to one of the most interesting problems in classical physics, namely, computing the lowest energy of a classical system of particles on the grid. We provide a tight characterization of the complexity of this problem, and show that it is complete for the class $FP^{NEXP}$. Gottesman and Irani (FOCS 2009) also studied classical translationally invariant constraint satisfaction problems; they showed that the problem of deciding whether the cost of the optimal solution is below a given threshold is NEXP-complete. Our result is thus a strengthening of their result from the decision version to the function version of the problem. Our result can also be viewed as a generalization, to the translationally invariant setting, of Krentel's famous result from 1988, showing that the function version of SAT is complete for the class $FP^{NP}$. An essential ingredient in the proof is a study of the complexity of a gapped variant of the problem. We show that it is NEXP-hard to approximate the cost of the optimal assignment to within an additive error of $\Omega(N^{1/4})$, for an $N \times N$ grid. To the best of our knowledge, no gapped result is known for CSPs on the grid, even in the non-translationally invariant case. As a byproduct of our results, we also show that a decision version of the optimization problem which asks whether the cost of the optimal assignment is odd or even is also complete for $P^{NEXP}$.