Dataset schema (column: type, value range):

- id: string, length 9 to 10
- submitter: string, length 1 to 64
- authors: string, length 4 to 20.7k
- title: string, length 4 to 246
- comments: string, length 1 to 523
- journal-ref: string, length 4 to 404
- doi: string, length 11 to 153
- report-no: string, length 2 to 254
- categories: string, length 5 to 98
- license: string, 9 classes
- orig_abstract: string, length 14 to 3.35k
- versions: list, length 1 to 60
- update_date: string, length 10 (fixed)
- authors_parsed: list, length 1 to 1.35k
- abstract: string, length 11 to 3.34k
id: 2308.00109
submitter: Elliot Murphy
authors: Gary Marcus, Evelina Leivada, Elliot Murphy
title: A Sentence is Worth a Thousand Pictures: Can Large Language Models Understand Human Language?
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: Artificial Intelligence applications show great potential for language-related tasks that rely on next-word prediction. The current generation of large language models have been linked to claims about human-like linguistic performance and their applications are hailed both as a key step towards Artificial General Intelligence and as major advance in understanding the cognitive, and even neural basis of human language. We analyze the contribution of large language models as theoretically informative representations of a target system vs. atheoretical powerful mechanistic tools, and we identify the key abilities that are still missing from the current state of development and exploitation of these models.
versions: [ { "created": "Wed, 26 Jul 2023 18:58:53 GMT", "version": "v1" } ]
update_date: 2023-08-02
authors_parsed: [ [ "Marcus", "Gary", "" ], [ "Leivada", "Evelina", "" ], [ "Murphy", "Elliot", "" ] ]
abstract: (identical to orig_abstract above)
id: 2205.06969
submitter: Minfa Wang
authors: Minfa Wang
title: Mask CycleGAN: Unpaired Multi-modal Domain Translation with Interpretable Latent Variable
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: We propose Mask CycleGAN, a novel architecture for unpaired image domain translation built based on CycleGAN, with an aim to address two issues: 1) unimodality in image translation and 2) lack of interpretability of latent variables. Our innovation in the technical approach is comprised of three key components: masking scheme, generator and objective. Experimental results demonstrate that this architecture is capable of bringing variations to generated images in a controllable manner and is reasonably robust to different masks.
versions: [ { "created": "Sat, 14 May 2022 05:05:37 GMT", "version": "v1" } ]
update_date: 2022-05-17
authors_parsed: [ [ "Wang", "Minfa", "" ] ]
abstract: (identical to orig_abstract above)
id: 2005.06602
submitter: Martin P\"omsl
authors: Martin P\"omsl (Osnabr\"uck University) and Roman Lyapin (Cogent Labs Inc.)
title: CIRCE at SemEval-2020 Task 1: Ensembling Context-Free and Context-Dependent Word Representations
comments: Accepted at SemEval-2020 Task 1 @ COLING 2020. Code available at https://github.com/mpoemsl/circe
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: This paper describes the winning contribution to SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection (Subtask 2) handed in by team UG Student Intern. We present an ensemble model that makes predictions based on context-free and context-dependent word representations. The key findings are that (1) context-free word representations are a powerful and robust baseline, (2) a sentence classification objective can be used to obtain useful context-dependent word representations, and (3) combining those representations increases performance on some datasets while decreasing performance on others.
versions: [ { "created": "Thu, 30 Apr 2020 13:18:29 GMT", "version": "v1" }, { "created": "Mon, 13 Jul 2020 10:10:05 GMT", "version": "v2" }, { "created": "Tue, 6 Oct 2020 13:50:47 GMT", "version": "v3" } ]
update_date: 2020-10-07
authors_parsed: [ [ "Pömsl", "Martin", "", "Osnabrück University" ], [ "Lyapin", "Roman", "", "Cogent Labs Inc." ] ]
abstract: (identical to orig_abstract above)
id: 1708.08750
submitter: Allan Melvin Andrew
authors: Allan Melvin Andrew, Ammar Zakaria, Shaharil Mad Saad and Ali Yeon Md Shakaff
title: Multi-Stage Feature Selection Based Intelligent Classifier for Classification of Incipient Stage Fire in Building
comments: electronic nose; gas sensors; fire detection; feature selection; feature fusion; Artificial intelligence, machine learning, neural networks, remote sensing, decision support
journal-ref: Sensors 2016, 16, 31
doi: 10.3390/s16010031
report-no: null
categories: cs.CY cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In this study, an early fire detection algorithm has been proposed based on low cost array sensing system, utilizing gas sensors, dust particles and ambient sensors such as temperature and humidity sensor. The odor or smell-print emanated from various fire sources and building construction materials at early stage are measured. For this purpose, odor profile data from five common fire sources and three common building construction materials were used to develop the classification model. Normalized feature extractions of the smell print data were performed before subjected to prediction classifier. These features represent the odor signals in the time domain. The obtained features undergo the proposed multi-stage feature selection technique and lastly, further reduced by Principal Component Analysis (PCA), a dimension reduction technique. The hybrid PCA-PNN based approach has been applied on different datasets from in-house developed system and the portable electronic nose unit. Experimental classification results show that the dimension reduction process performed by PCA has improved the classification accuracy and provided high reliability, regardless of ambient temperature and humidity variation, baseline sensor drift, the different gas concentration level and exposure towards different heating temperature range.
versions: [ { "created": "Sat, 12 Aug 2017 09:54:45 GMT", "version": "v1" } ]
update_date: 2017-08-30
authors_parsed: [ [ "Andrew", "Allan Melvin", "" ], [ "Zakaria", "Ammar", "" ], [ "Saad", "Shaharil Mad", "" ], [ "Shakaff", "Ali Yeon Md", "" ] ]
abstract: (identical to orig_abstract above)
id: 2304.01959
submitter: Taehoon Kim
authors: Taehoon Kim, Bohyung Han
title: Randomized Adversarial Style Perturbations for Domain Generalization
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP), which is motivated by the observation that the characteristics of each domain are captured by the feature statistics corresponding to style. The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, and makes the model learn against being misled by the unexpected styles observed in unseen target domains. While RASP is effective to handle domain shifts, its naive integration into the training procedure might degrade the capability of learning knowledge from source domains because it has no restriction on the perturbations of representations. This challenge is alleviated by Normalized Feature Mixup (NFM), which facilitates the learning of the original features while achieving robustness to perturbed representations via their mixup during training. We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization performance, especially in large-scale benchmarks.
versions: [ { "created": "Tue, 4 Apr 2023 17:07:06 GMT", "version": "v1" }, { "created": "Tue, 21 Nov 2023 06:04:48 GMT", "version": "v2" } ]
update_date: 2023-11-23
authors_parsed: [ [ "Kim", "Taehoon", "" ], [ "Han", "Bohyung", "" ] ]
abstract: (identical to orig_abstract above)
id: 2009.05668
submitter: Li Yang
authors: Li Yang, Zhezhi He, Junshan Zhang, Deliang Fan
title: KSM: Fast Multiple Task Adaption via Kernel-wise Soft Mask Learning
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Deep Neural Networks (DNN) could forget the knowledge about earlier tasks when learning new tasks, and this is known as \textit{catastrophic forgetting}. While recent continual learning methods are capable of alleviating the catastrophic problem on toy-sized datasets, some issues still remain to be tackled when applying them in real-world problems. Recently, the fast mask-based learning method (e.g. piggyback \cite{mallya2018piggyback}) is proposed to address these issues by learning only a binary element-wise mask in a fast manner, while keeping the backbone model fixed. However, the binary mask has limited modeling capacity for new tasks. A more recent work \cite{hung2019compacting} proposes a compress-grow-based method (CPG) to achieve better accuracy for new tasks by partially training backbone model, but with order-higher training cost, which makes it infeasible to be deployed into popular state-of-the-art edge-/mobile-learning. The primary goal of this work is to simultaneously achieve fast and high-accuracy multi task adaption in continual learning setting. Thus motivated, we propose a new training method called \textit{kernel-wise Soft Mask} (KSM), which learns a kernel-wise hybrid binary and real-value soft mask for each task, while using the same backbone model. Such a soft mask can be viewed as a superposition of a binary mask and a properly scaled real-value tensor, which offers a richer representation capability without low-level kernel support to meet the objective of low hardware overhead. We validate KSM on multiple benchmark datasets against recent state-of-the-art methods (e.g. Piggyback, Packnet, CPG, etc.), which shows good improvement in both accuracy and training cost.
versions: [ { "created": "Fri, 11 Sep 2020 21:48:39 GMT", "version": "v1" } ]
update_date: 2020-09-15
authors_parsed: [ [ "Yang", "Li", "" ], [ "He", "Zhezhi", "" ], [ "Zhang", "Junshan", "" ], [ "Fan", "Deliang", "" ] ]
abstract: (identical to orig_abstract above)
id: 1606.01588
submitter: Mahmoud Ismail
authors: Salman Niazi (1), Mahmoud Ismail (1), Steffen Grohsschmiedt (2), Mikael Ronstr\"om (3), Seif Haridi (1), Jim Dowling (1) ((1) KTH - Royal Institute of Technology, (2) Spotify AB, (3) Oracle)
title: HopsFS: Scaling Hierarchical File System Metadata Using NewSQL Databases
comments: null
journal-ref: The 15th USENIX Conference on File and Storage Technologies (FAST 17) (2017) 89-104
doi: null
report-no: null
categories: cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Recent improvements in both the performance and scalability of shared-nothing, transactional, in-memory NewSQL databases have reopened the research question of whether distributed metadata for hierarchical file systems can be managed using commodity databases. In this paper, we introduce HopsFS, a next generation distribution of the Hadoop Distributed File System (HDFS) that replaces HDFS' single node in-memory metadata service, with a distributed metadata service built on a NewSQL database. By removing the metadata bottleneck, HopsFS enables an order of magnitude larger and higher throughput clusters compared to HDFS. Metadata capacity has been increased to at least 37 times HDFS' capacity, and in experiments based on a workload trace from Spotify, we show that HopsFS supports 16 to 37 times the throughput of Apache HDFS. HopsFS also has lower latency for many concurrent clients, and no downtime during failover. Finally, as metadata is now stored in a commodity database, it can be safely extended and easily exported to external systems for online analysis and free-text search.
versions: [ { "created": "Mon, 6 Jun 2016 00:10:35 GMT", "version": "v1" }, { "created": "Wed, 22 Feb 2017 14:18:02 GMT", "version": "v2" } ]
update_date: 2017-02-23
authors_parsed: [ [ "Niazi", "Salman", "" ], [ "Ismail", "Mahmoud", "" ], [ "Grohsschmiedt", "Steffen", "" ], [ "Ronström", "Mikael", "" ], [ "Haridi", "Seif", "" ], [ "Dowling", "Jim", "" ] ]
abstract: (identical to orig_abstract above)
id: 2303.03340
submitter: Eli Whitehouse
authors: Eli Whitehouse
title: Symbolic Synthesis of Neural Networks
comments: 8 pages, 1 figure. Minor formula correction and minor textual revision
journal-ref: null
doi: null
report-no: null
categories: cs.NE cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Neural networks adapt very well to distributed and continuous representations, but struggle to generalize from small amounts of data. Symbolic systems commonly achieve data efficient generalization by exploiting modularity to benefit from local and discrete features of a representation. These features allow symbolic programs to be improved one module at a time and to experience combinatorial growth in the values they can successfully process. However, it is difficult to design a component that can be used to form symbolic abstractions and which is adequately overparametrized to learn arbitrary high-dimensional transformations. I present Graph-based Symbolically Synthesized Neural Networks (G-SSNNs), a class of neural modules that operate on representations modified with synthesized symbolic programs to include a fixed set of local and discrete features. I demonstrate that the choice of injected features within a G-SSNN module modulates the data efficiency and generalization of baseline neural models, creating predictable patterns of both heightened and curtailed generalization. By training G-SSNNs, we also derive information about desirable semantics of symbolic programs without manual engineering. This information is compact and amenable to abstraction, but can also be flexibly recontextualized for other high-dimensional settings. In future work, I will investigate data efficient generalization and the transferability of learned symbolic representations in more complex G-SSNN designs based on more complex classes of symbolic programs. Experimental code and data are available at https://github.com/shlomenu/symbolically_synthesized_networks .
versions: [ { "created": "Mon, 6 Mar 2023 18:13:14 GMT", "version": "v1" }, { "created": "Tue, 14 Mar 2023 17:57:32 GMT", "version": "v2" } ]
update_date: 2023-03-15
authors_parsed: [ [ "Whitehouse", "Eli", "" ] ]
abstract: (identical to orig_abstract above)
id: 1705.05267
submitter: Ahmed Alaa
authors: Ahmed M. Alaa, Scott Hu, Mihaela van der Schaar
title: Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Critically ill patients in regular wards are vulnerable to unanticipated adverse events which require prompt transfer to the intensive care unit (ICU). To allow for accurate prognosis of deteriorating patients, we develop a novel continuous-time probabilistic model for a monitored patient's temporal sequence of physiological data. Our model captures "informatively sampled" patient episodes: the clinicians' decisions on when to observe a hospitalized patient's vital signs and lab tests over time are represented by a marked Hawkes process, with intensity parameters that are modulated by the patient's latent clinical states, and with observable physiological data (mark process) modeled as a switching multi-task Gaussian process. In addition, our model captures "informatively censored" patient episodes by representing the patient's latent clinical states as an absorbing semi-Markov jump process. The model parameters are learned from offline patient episodes in the electronic health records via an EM-based algorithm. Experiments conducted on a cohort of patients admitted to a major medical center over a 3-year period show that risk prognosis based on our model significantly outperforms the currently deployed medical risk scores and other baseline machine learning algorithms.
versions: [ { "created": "Mon, 15 May 2017 14:29:51 GMT", "version": "v1" } ]
update_date: 2017-05-16
authors_parsed: [ [ "Alaa", "Ahmed M.", "" ], [ "Hu", "Scott", "" ], [ "van der Schaar", "Mihaela", "" ] ]
abstract: (identical to orig_abstract above)
id: 2405.10999
submitter: Oliver Kramer
authors: Oliver Kramer
title: Large Language Models for Tuning Evolution Strategies
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.CL cs.NE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Large Language Models (LLMs) exhibit world knowledge and inference capabilities, making them powerful tools for various applications. This paper proposes a feedback loop mechanism that leverages these capabilities to tune Evolution Strategies (ES) parameters effectively. The mechanism involves a structured process of providing programming instructions, executing the corresponding code, and conducting thorough analysis. This process is specifically designed for the optimization of ES parameters. The method operates through an iterative cycle, ensuring continuous refinement of the ES parameters. First, LLMs process the instructions to generate or modify the code. The code is then executed, and the results are meticulously logged. Subsequent analysis of these results provides insights that drive further improvements. An experiment on tuning the learning rates of ES using the LLaMA3 model demonstrate the feasibility of this approach. This research illustrates how LLMs can be harnessed to improve ES algorithms' performance and suggests broader applications for similar feedback loop mechanisms in various domains.
versions: [ { "created": "Thu, 16 May 2024 21:14:32 GMT", "version": "v1" } ]
update_date: 2024-05-21
authors_parsed: [ [ "Kramer", "Oliver", "" ] ]
abstract: (identical to orig_abstract above)
id: 2212.02112
submitter: Meng Lan
authors: Meng Lan, Jing Zhang, Lefei Zhang, Dacheng Tao
title: Learning to Learn Better for Video Object Segmentation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Recently, the joint learning framework (JOINT) integrates matching based transductive reasoning and online inductive learning to achieve accurate and robust semi-supervised video object segmentation (SVOS). However, using the mask embedding as the label to guide the generation of target features in the two branches may result in inadequate target representation and degrade the performance. Besides, how to reasonably fuse the target features in the two different branches rather than simply adding them together to avoid the adverse effect of one dominant branch has not been investigated. In this paper, we propose a novel framework that emphasizes Learning to Learn Better (LLB) target features for SVOS, termed LLB, where we design the discriminative label generation module (DLGM) and the adaptive fusion module to address these issues. Technically, the DLGM takes the background-filtered frame instead of the target mask as input and adopts a lightweight encoder to generate the target features, which serves as the label of the online few-shot learner and the value of the decoder in the transformer to guide the two branches to learn more discriminative target representation. The adaptive fusion module maintains a learnable gate for each branch, which reweighs the element-wise feature representation and allows an adaptive amount of target information in each branch flowing to the fused target feature, thus preventing one branch from being dominant and making the target feature more robust to distractor. Extensive experiments on public benchmarks show that our proposed LLB method achieves state-of-the-art performance.
versions: [ { "created": "Mon, 5 Dec 2022 09:10:34 GMT", "version": "v1" } ]
update_date: 2022-12-06
authors_parsed: [ [ "Lan", "Meng", "" ], [ "Zhang", "Jing", "" ], [ "Zhang", "Lefei", "" ], [ "Tao", "Dacheng", "" ] ]
abstract: (identical to orig_abstract above)
id: 2206.10185
submitter: Sajad Khodadadian
authors: Sajad Khodadadian, Pranay Sharma, Gauri Joshi, Siva Theja Maguluri
title: Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling
comments: 69 pages, 1 figure, accepted to ICML 2022 for long presentation
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Since reinforcement learning algorithms are notoriously data-intensive, the task of sampling observations from the environment is usually split across multiple agents. However, transferring these observations from the agents to a central location can be prohibitively expensive in terms of the communication cost, and it can also compromise the privacy of each agent's local behavior policy. In this paper, we consider a federated reinforcement learning framework where multiple agents collaboratively learn a global model, without sharing their individual data and policies. Each agent maintains a local copy of the model and updates it using locally sampled data. Although having N agents enables the sampling of N times more data, it is not clear if it leads to proportional convergence speedup. We propose federated versions of on-policy TD, off-policy TD and Q-learning, and analyze their convergence. For all these algorithms, to the best of our knowledge, we are the first to consider Markovian noise and multiple local updates, and prove a linear convergence speedup with respect to the number of agents. To obtain these results, we show that federated TD and Q-learning are special cases of a general framework for federated stochastic approximation with Markovian noise, and we leverage this framework to provide a unified convergence analysis that applies to all the algorithms.
versions: [ { "created": "Tue, 21 Jun 2022 08:39:12 GMT", "version": "v1" } ]
update_date: 2022-06-22
authors_parsed: [ [ "Khodadadian", "Sajad", "" ], [ "Sharma", "Pranay", "" ], [ "Joshi", "Gauri", "" ], [ "Maguluri", "Siva Theja", "" ] ]
abstract: (identical to orig_abstract above)
2103.07205
Elochukwu Ukwandu Dr
Comfort Olebara, Obianuju Ezugwu, Adaora Obayi, Deborah Ebem, Ujunwa Mbgoh, Elochukwu Ukwandu
Determining the Impacts of Social Media on Mood, Time Management and Academic Activities of Students and the Relationship with their Academic Performance
7 pages, conference paper
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The number of social media sites has increased exponentially, with new ones cashing in on the weaknesses of older ones and others going beyond community guidelines by offering uncensored content. The vendors of these platforms, in order to have a wider reach, do not place restrictions on viewing age, promise young people fame, and make other such attractive offers that leave the youths addicted to the sites. The possibility of hacking into users' accounts and using them for fraud is another rave among Nigerian youths with a desire for quick riches. The crash in prices of data, smartphones, and related digital devices has increased availability and access, thereby closing the digital divide and widening its adverse effects on the youths' morals and academic pursuits. It is important that the Nigerian government understand the factors that contribute to the dwindling performance level of students in government-owned institutions, in order to put in place policies and infrastructure that would help combat these challenges. This study investigated the effects of social media on students' academic activities, mood, and time management abilities. The results indicated that the association between social media and academic activities is statistically significant. However, a negative association exists between them, which implies that the higher the level of social media activity, the lower the participation in academic activities. A similar association was observed for the effects of social media on students' time management ability.
[ { "created": "Fri, 12 Mar 2021 10:58:46 GMT", "version": "v1" } ]
2021-03-15
[ [ "Olebara", "Comfort", "" ], [ "Ezugwu", "Obianuju", "" ], [ "Obayi", "Adaora", "" ], [ "Ebem", "Deborah", "" ], [ "Mbgoh", "Ujunwa", "" ], [ "Ukwandu", "Elochukwu", "" ] ]
The number of social media sites has increased exponentially, with new ones cashing in on the weaknesses of older ones and others going beyond community guidelines by offering uncensored content. The vendors of these platforms, in order to have a wider reach, do not place restrictions on viewing age, promise young people fame, and make other such attractive offers that leave the youths addicted to the sites. The possibility of hacking into users' accounts and using them for fraud is another rave among Nigerian youths with a desire for quick riches. The crash in prices of data, smartphones, and related digital devices has increased availability and access, thereby closing the digital divide and widening its adverse effects on the youths' morals and academic pursuits. It is important that the Nigerian government understand the factors that contribute to the dwindling performance level of students in government-owned institutions, in order to put in place policies and infrastructure that would help combat these challenges. This study investigated the effects of social media on students' academic activities, mood, and time management abilities. The results indicated that the association between social media and academic activities is statistically significant. However, a negative association exists between them, which implies that the higher the level of social media activity, the lower the participation in academic activities. A similar association was observed for the effects of social media on students' time management ability.
2011.12143
Lukun Zheng
Yuhang Jiang, Lukun Zheng
Deep learning for video game genre classification
21 pages, 6 figures, 3 tables. arXiv admin note: substantial text overlap with arXiv:2011.07658
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video game genre classification based on cover images and textual descriptions would be utterly beneficial to many modern identification, collocation, and retrieval systems. At the same time, it is also an extremely challenging task for the following reasons: First, there exists a wide variety of video game genres, many of which are not concretely defined. Second, video game covers vary in many different ways, such as colors, styles, textual information, etc., even for games of the same genre. Third, cover designs and textual descriptions may vary due to many external factors such as country, culture, target reader populations, etc. With the growing competitiveness in the video game industry, cover designers and typographers push cover designs to their limits in the hope of attracting sales. Computer-based automatic video game genre classification systems have become a particularly exciting research topic in recent years. In this paper, we propose a multi-modal deep learning framework to solve this problem. The contribution of this paper is four-fold. First, we compile a large dataset consisting of 50,000 video games from 21 genres, made up of cover images, description text, title text, and genre information. Second, state-of-the-art image-based and text-based models are evaluated thoroughly for the task of genre classification for video games. Third, we develop an efficient and scalable multi-modal framework based on both images and texts. Fourth, a thorough analysis of the experimental results is given and future work to improve the performance is suggested. The results show that the multi-modal framework outperforms the current state-of-the-art image-based or text-based models. Several challenges are outlined for this task. More efforts and resources are needed for this classification task in order to reach a satisfactory level.
[ { "created": "Sat, 21 Nov 2020 22:31:43 GMT", "version": "v1" } ]
2020-11-25
[ [ "Jiang", "Yuhang", "" ], [ "Zheng", "Lukun", "" ] ]
Video game genre classification based on cover images and textual descriptions would be utterly beneficial to many modern identification, collocation, and retrieval systems. At the same time, it is also an extremely challenging task for the following reasons: First, there exists a wide variety of video game genres, many of which are not concretely defined. Second, video game covers vary in many different ways, such as colors, styles, textual information, etc., even for games of the same genre. Third, cover designs and textual descriptions may vary due to many external factors such as country, culture, target reader populations, etc. With the growing competitiveness in the video game industry, cover designers and typographers push cover designs to their limits in the hope of attracting sales. Computer-based automatic video game genre classification systems have become a particularly exciting research topic in recent years. In this paper, we propose a multi-modal deep learning framework to solve this problem. The contribution of this paper is four-fold. First, we compile a large dataset consisting of 50,000 video games from 21 genres, made up of cover images, description text, title text, and genre information. Second, state-of-the-art image-based and text-based models are evaluated thoroughly for the task of genre classification for video games. Third, we develop an efficient and scalable multi-modal framework based on both images and texts. Fourth, a thorough analysis of the experimental results is given and future work to improve the performance is suggested. The results show that the multi-modal framework outperforms the current state-of-the-art image-based or text-based models. Several challenges are outlined for this task. More efforts and resources are needed for this classification task in order to reach a satisfactory level.
1810.01480
Julia Kreutzer
Julia Kreutzer, Artem Sokolov
Learning to Segment Inputs for NMT Favors Character-Level Processing
Technical report for IWSLT 2018 paper
null
null
null
cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most modern neural machine translation (NMT) systems rely on presegmented inputs. Segmentation granularity importantly determines the input and output sequence lengths, hence the modeling depth, and the source and target vocabularies, which in turn determine model size, the computational cost of softmax normalization, and the handling of out-of-vocabulary words. However, the current practice is to use static, heuristic-based segmentations that are fixed before NMT training. This begs the question whether the chosen segmentation is optimal for the translation task. To overcome suboptimal segmentation choices, we present an algorithm for dynamic segmentation based on the Adaptive Computation Time algorithm (Graves 2016), which is trainable end-to-end and driven by the NMT objective. In an evaluation on four translation tasks we found that, given the freedom to navigate between different segmentation levels, the model prefers to operate on (almost) character level, providing support for purely character-level NMT models from a novel angle.
[ { "created": "Tue, 2 Oct 2018 19:52:38 GMT", "version": "v1" }, { "created": "Wed, 24 Oct 2018 10:21:05 GMT", "version": "v2" }, { "created": "Mon, 5 Nov 2018 09:14:21 GMT", "version": "v3" } ]
2018-11-06
[ [ "Kreutzer", "Julia", "" ], [ "Sokolov", "Artem", "" ] ]
Most modern neural machine translation (NMT) systems rely on presegmented inputs. Segmentation granularity importantly determines the input and output sequence lengths, hence the modeling depth, and the source and target vocabularies, which in turn determine model size, the computational cost of softmax normalization, and the handling of out-of-vocabulary words. However, the current practice is to use static, heuristic-based segmentations that are fixed before NMT training. This begs the question whether the chosen segmentation is optimal for the translation task. To overcome suboptimal segmentation choices, we present an algorithm for dynamic segmentation based on the Adaptive Computation Time algorithm (Graves 2016), which is trainable end-to-end and driven by the NMT objective. In an evaluation on four translation tasks we found that, given the freedom to navigate between different segmentation levels, the model prefers to operate on (almost) character level, providing support for purely character-level NMT models from a novel angle.
2005.02549
Dong Chen
Dong Chen, Hong Yu
Birth-Burst in Evolving Networks
11 pages, 4 figures
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of complex networks is governed by both growing rules and internal properties. Most evolving network models (e.g., preferential attachment) emphasize the growing strategy while neglecting the characteristics of individual nodes. In this study, we analyzed a widely studied network: the evolving protein-protein interaction (PPI) network. We discovered the critical contribution of individual nodes, occurring particularly at their birth. Specifically, a node is born with a fitness value, a measurement of its intrinsic significance. Upon the introduction of a node with a large fitness into the network, a correspondingly high birth degree is determined, leading to an abrupt increase of connectivity in the network. The degree fraction of these large (hub) nodes does not decay away with the network evolution, retaining a constant influence over their lifetime. Here we developed the birth-burst model, an adaptation of the fitness model, to simulate degree bursts and phase transitions in the network evolution.
[ { "created": "Wed, 6 May 2020 01:18:14 GMT", "version": "v1" } ]
2020-05-07
[ [ "Chen", "Dong", "" ], [ "Yu", "Hong", "" ] ]
The evolution of complex networks is governed by both growing rules and internal properties. Most evolving network models (e.g., preferential attachment) emphasize the growing strategy while neglecting the characteristics of individual nodes. In this study, we analyzed a widely studied network: the evolving protein-protein interaction (PPI) network. We discovered the critical contribution of individual nodes, occurring particularly at their birth. Specifically, a node is born with a fitness value, a measurement of its intrinsic significance. Upon the introduction of a node with a large fitness into the network, a correspondingly high birth degree is determined, leading to an abrupt increase of connectivity in the network. The degree fraction of these large (hub) nodes does not decay away with the network evolution, retaining a constant influence over their lifetime. Here we developed the birth-burst model, an adaptation of the fitness model, to simulate degree bursts and phase transitions in the network evolution.
2108.00377
Gil Shapira
Gil Shapira, Noga Levy, Ishay Goldin, Roy J. Jevnisek
Knowing When to Quit: Selective Cascaded Regression with Patch Attention for Real-Time Face Alignment
Accepted to the 29th ACM International Conference on Multimedia (MM 21)
null
10.1145/3474085.3475401
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Facial landmarks (FLM) estimation is a critical component in many face-related applications. In this work, we aim to optimize for both accuracy and speed and explore the trade-off between them. Our key observation is that not all faces are created equal. Frontal faces with neutral expressions converge faster than faces with extreme poses or expressions. To differentiate among samples, we train our model to predict the regression error after each iteration. If the current iteration is accurate enough, we stop iterating, saving redundant iterations while keeping the accuracy in check. We also observe that as neighboring patches overlap, we can infer all facial landmarks (FLMs) with only a small number of patches without a major accuracy sacrifice. Architecturally, we offer a multi-scale, patch-based, lightweight feature extractor with a fine-grained local patch attention module, which computes a patch weighting according to the information in the patch itself and enhances the expressive power of the patch features. We analyze the patch attention data to infer where the model is attending when regressing facial landmarks and compare it to face attention in humans. Our model runs in real-time on a mobile device GPU, with 95 Mega Multiply-Add (MMA) operations, outperforming all state-of-the-art methods under 1000 MMA, with a normalized mean error of 8.16 on the 300W challenging dataset.
[ { "created": "Sun, 1 Aug 2021 06:51:47 GMT", "version": "v1" }, { "created": "Tue, 3 Aug 2021 07:21:08 GMT", "version": "v2" } ]
2021-08-04
[ [ "Shapira", "Gil", "" ], [ "Levy", "Noga", "" ], [ "Goldin", "Ishay", "" ], [ "Jevnisek", "Roy J.", "" ] ]
Facial landmarks (FLM) estimation is a critical component in many face-related applications. In this work, we aim to optimize for both accuracy and speed and explore the trade-off between them. Our key observation is that not all faces are created equal. Frontal faces with neutral expressions converge faster than faces with extreme poses or expressions. To differentiate among samples, we train our model to predict the regression error after each iteration. If the current iteration is accurate enough, we stop iterating, saving redundant iterations while keeping the accuracy in check. We also observe that as neighboring patches overlap, we can infer all facial landmarks (FLMs) with only a small number of patches without a major accuracy sacrifice. Architecturally, we offer a multi-scale, patch-based, lightweight feature extractor with a fine-grained local patch attention module, which computes a patch weighting according to the information in the patch itself and enhances the expressive power of the patch features. We analyze the patch attention data to infer where the model is attending when regressing facial landmarks and compare it to face attention in humans. Our model runs in real-time on a mobile device GPU, with 95 Mega Multiply-Add (MMA) operations, outperforming all state-of-the-art methods under 1000 MMA, with a normalized mean error of 8.16 on the 300W challenging dataset.
2206.10234
Kostia Chardonnet
Kostia Chardonnet, Marc de Visme, Benoît Valiron, Renaud Vilmart
The Many-Worlds Calculus
null
null
null
null
cs.LO quant-ph
http://creativecommons.org/licenses/by/4.0/
We propose a new typed graphical language for quantum computation, based on compact categories with biproducts. Our language generalizes existing approaches such as the ZX-calculus and quantum circuits, while offering a natural framework to support quantum control: it natively supports "quantum tests". The language comes equipped with a denotational semantics based on linear maps, and an equational theory. Through the use of normal forms for the diagrams, we prove the language to be universal, and the equational theory to be complete with respect to the semantics.
[ { "created": "Tue, 21 Jun 2022 10:10:26 GMT", "version": "v1" }, { "created": "Wed, 3 Aug 2022 14:44:15 GMT", "version": "v2" }, { "created": "Thu, 30 Nov 2023 11:27:51 GMT", "version": "v3" } ]
2023-12-01
[ [ "Chardonnet", "Kostia", "" ], [ "de Visme", "Marc", "" ], [ "Valiron", "Benoît", "" ], [ "Vilmart", "Renaud", "" ] ]
We propose a new typed graphical language for quantum computation, based on compact categories with biproducts. Our language generalizes existing approaches such as the ZX-calculus and quantum circuits, while offering a natural framework to support quantum control: it natively supports "quantum tests". The language comes equipped with a denotational semantics based on linear maps, and an equational theory. Through the use of normal forms for the diagrams, we prove the language to be universal, and the equational theory to be complete with respect to the semantics.
2211.13051
Kevin Frans
Kevin Frans, Phillip Isola
Powderworld: A Platform for Understanding Generalization via Rich Task Distributions
null
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the grand challenges of reinforcement learning is the ability to generalize to new tasks. However, general agents require a set of rich, diverse tasks to train on. Designing a `foundation environment' for such tasks is tricky -- the ideal environment would support a range of emergent phenomena, an expressive task space, and fast runtime. To take a step towards addressing this research bottleneck, this work presents Powderworld, a lightweight yet expressive simulation environment running directly on the GPU. Within Powderworld, two motivating challenge distributions are presented, one for world-modelling and one for reinforcement learning. Each contains hand-designed test tasks to examine generalization. Experiments indicate that increasing the environment's complexity improves generalization for world models and certain reinforcement learning agents, yet may inhibit learning in high-variance environments. Powderworld aims to support the study of generalization by providing a source of diverse tasks arising from the same core rules.
[ { "created": "Wed, 23 Nov 2022 15:51:44 GMT", "version": "v1" }, { "created": "Mon, 12 Jun 2023 23:43:23 GMT", "version": "v2" }, { "created": "Sun, 15 Oct 2023 21:15:16 GMT", "version": "v3" } ]
2023-10-17
[ [ "Frans", "Kevin", "" ], [ "Isola", "Phillip", "" ] ]
One of the grand challenges of reinforcement learning is the ability to generalize to new tasks. However, general agents require a set of rich, diverse tasks to train on. Designing a `foundation environment' for such tasks is tricky -- the ideal environment would support a range of emergent phenomena, an expressive task space, and fast runtime. To take a step towards addressing this research bottleneck, this work presents Powderworld, a lightweight yet expressive simulation environment running directly on the GPU. Within Powderworld, two motivating challenge distributions are presented, one for world-modelling and one for reinforcement learning. Each contains hand-designed test tasks to examine generalization. Experiments indicate that increasing the environment's complexity improves generalization for world models and certain reinforcement learning agents, yet may inhibit learning in high-variance environments. Powderworld aims to support the study of generalization by providing a source of diverse tasks arising from the same core rules.
2302.08613
Zhijie Qiao
Zhijie Qiao, Helen Loeb, Venkata Gurrla, Matt Lebermann, Johannes Betz, Rahul Mangharam
Drive Right: Promoting Autonomous Vehicle Education Through an Integrated Simulation Platform
null
SAE Int. J. of CAV 5(4):2022
10.4271/12-05-04-0028
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Autonomous vehicles (AVs) are being rapidly introduced into our lives. However, public misunderstanding and mistrust have become prominent issues hindering the acceptance of these driverless technologies. The primary objective of this study is to evaluate the effectiveness of a driving simulator to help the public gain an understanding of AVs and build trust in them. To achieve this aim, we built an integrated simulation platform, designed various driving scenarios, and recruited 28 participants for the experiment. The study results indicate that a driving simulator effectively decreases the participants' perceived risk of AVs and increases perceived usefulness. The proposed methodologies and findings of this study can be further explored by auto manufacturers and policymakers to provide user-friendly AV design.
[ { "created": "Thu, 16 Feb 2023 22:35:08 GMT", "version": "v1" } ]
2023-02-20
[ [ "Qiao", "Zhijie", "" ], [ "Loeb", "Helen", "" ], [ "Gurrla", "Venkata", "" ], [ "Lebermann", "Matt", "" ], [ "Betz", "Johannes", "" ], [ "Mangharam", "Rahul", "" ] ]
Autonomous vehicles (AVs) are being rapidly introduced into our lives. However, public misunderstanding and mistrust have become prominent issues hindering the acceptance of these driverless technologies. The primary objective of this study is to evaluate the effectiveness of a driving simulator to help the public gain an understanding of AVs and build trust in them. To achieve this aim, we built an integrated simulation platform, designed various driving scenarios, and recruited 28 participants for the experiment. The study results indicate that a driving simulator effectively decreases the participants' perceived risk of AVs and increases perceived usefulness. The proposed methodologies and findings of this study can be further explored by auto manufacturers and policymakers to provide user-friendly AV design.
2407.05459
Junjie Chen
Matteo Castiglioni, Junjie Chen
Hiring for An Uncertain Task: Joint Design of Information and Contracts
contract design, information design
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we initiate the study of the computational problem of jointly designing information and contracts. We consider three possible classes of contracts with decreasing flexibility and increasing simplicity: ambiguous contracts, menus of explicit contracts, and explicit single contracts. Ambiguous contracts allow the principal to conceal the applied payment schemes through a contract that depends on the unknown state of nature, while explicit contracts reveal the contract prior to the agent's decision. Our results show a trade-off between the simplicity of the contracts and the computational complexity of the joint design. Indeed, we show that an approximately-optimal mechanism with ambiguous contracts can be computed in polynomial time. However, they are convoluted mechanisms and not well-suited for some real-world scenarios. Conversely, explicit menus of contracts and single contracts are simpler mechanisms, but they cannot be computed efficiently. In particular, we show that computing the optimal mechanism with explicit menus of contracts and single contracts is APX-hard. We also characterize the structure of optimal mechanisms. Interestingly, direct mechanisms are optimal both for the most flexible ambiguous contracts and for the least flexible explicit single contracts, but they are suboptimal for menus of contracts. Finally, motivated by our hardness results, we turn our attention to menus of linear contracts and single linear contracts. We show that both the problem of computing the optimal mechanism with an explicit menu of linear contracts and that with an explicit single linear contract admit an FPTAS.
[ { "created": "Sun, 7 Jul 2024 18:10:00 GMT", "version": "v1" } ]
2024-07-09
[ [ "Castiglioni", "Matteo", "" ], [ "Chen", "Junjie", "" ] ]
In this paper, we initiate the study of the computational problem of jointly designing information and contracts. We consider three possible classes of contracts with decreasing flexibility and increasing simplicity: ambiguous contracts, menus of explicit contracts, and explicit single contracts. Ambiguous contracts allow the principal to conceal the applied payment schemes through a contract that depends on the unknown state of nature, while explicit contracts reveal the contract prior to the agent's decision. Our results show a trade-off between the simplicity of the contracts and the computational complexity of the joint design. Indeed, we show that an approximately-optimal mechanism with ambiguous contracts can be computed in polynomial time. However, they are convoluted mechanisms and not well-suited for some real-world scenarios. Conversely, explicit menus of contracts and single contracts are simpler mechanisms, but they cannot be computed efficiently. In particular, we show that computing the optimal mechanism with explicit menus of contracts and single contracts is APX-hard. We also characterize the structure of optimal mechanisms. Interestingly, direct mechanisms are optimal both for the most flexible ambiguous contracts and for the least flexible explicit single contracts, but they are suboptimal for menus of contracts. Finally, motivated by our hardness results, we turn our attention to menus of linear contracts and single linear contracts. We show that both the problem of computing the optimal mechanism with an explicit menu of linear contracts and that with an explicit single linear contract admit an FPTAS.
1705.07086
Emmanouil Antonios Platanios
Emmanouil A. Platanios, Hoifung Poon, Tom M. Mitchell, Eric Horvitz
Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.
[ { "created": "Fri, 19 May 2017 16:52:52 GMT", "version": "v1" } ]
2017-05-22
[ [ "Platanios", "Emmanouil A.", "" ], [ "Poon", "Hoifung", "" ], [ "Mitchell", "Tom M.", "" ], [ "Horvitz", "Eric", "" ] ]
We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.
1811.11987
Laurent Boué
Laurent Boué
Deep learning for pedestrians: backpropagation in CNNs
null
null
null
null
cs.LG cs.AI cs.CV cs.SC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this document is to provide a pedagogical introduction to the main concepts underpinning the training of deep neural networks using gradient descent; a process known as backpropagation. Although we focus on a very influential class of architectures called "convolutional neural networks" (CNNs) the approach is generic and useful to the machine learning community as a whole. Motivated by the observation that derivations of backpropagation are often obscured by clumsy index-heavy narratives that appear somewhat mathemagical, we aim to offer a conceptually clear, vectorized description that articulates well the higher level logic. Following the principle of "writing is nature's way of letting you know how sloppy your thinking is", we try to make the calculations meticulous, self-contained and yet as intuitive as possible. Taking nothing for granted, ample illustrations serve as visual guides and an extensive bibliography is provided for further explorations. (For the sake of clarity, long mathematical derivations and visualizations have been broken up into short "summarized views" and longer "detailed views" encoded into the PDF as optional content groups. Some figures contain animations designed to illustrate important concepts in a more engaging style. For these reasons, we advise to download the document locally and open it using Adobe Acrobat Reader. Other viewers were not tested and may not render the detailed views, animations correctly.)
[ { "created": "Thu, 29 Nov 2018 07:00:09 GMT", "version": "v1" } ]
2018-11-30
[ [ "Boué", "Laurent", "" ] ]
The goal of this document is to provide a pedagogical introduction to the main concepts underpinning the training of deep neural networks using gradient descent; a process known as backpropagation. Although we focus on a very influential class of architectures called "convolutional neural networks" (CNNs), the approach is generic and useful to the machine learning community as a whole. Motivated by the observation that derivations of backpropagation are often obscured by clumsy index-heavy narratives that appear somewhat mathemagical, we aim to offer a conceptually clear, vectorized description that articulates well the higher level logic. Following the principle of "writing is nature's way of letting you know how sloppy your thinking is", we try to make the calculations meticulous, self-contained and yet as intuitive as possible. Taking nothing for granted, ample illustrations serve as visual guides and an extensive bibliography is provided for further explorations. (For the sake of clarity, long mathematical derivations and visualizations have been broken up into short "summarized views" and longer "detailed views" encoded into the PDF as optional content groups. Some figures contain animations designed to illustrate important concepts in a more engaging style. For these reasons, we advise downloading the document locally and opening it using Adobe Acrobat Reader. Other viewers were not tested and may not render the detailed views and animations correctly.)
2310.15110
Ruoxi Shi
Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, Hao Su
Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
We report Zero123++, an image-conditioned diffusion model for generating 3D-consistent multi-view images from a single input view. To take full advantage of pretrained 2D generative priors, we develop various conditioning and training schemes to minimize the effort of finetuning from off-the-shelf image diffusion models such as Stable Diffusion. Zero123++ excels in producing high-quality, consistent multi-view images from a single image, overcoming common issues like texture degradation and geometric misalignment. Furthermore, we showcase the feasibility of training a ControlNet on Zero123++ for enhanced control over the generation process. The code is available at https://github.com/SUDO-AI-3D/zero123plus.
[ { "created": "Mon, 23 Oct 2023 17:18:59 GMT", "version": "v1" } ]
2023-10-24
[ [ "Shi", "Ruoxi", "" ], [ "Chen", "Hansheng", "" ], [ "Zhang", "Zhuoyang", "" ], [ "Liu", "Minghua", "" ], [ "Xu", "Chao", "" ], [ "Wei", "Xinyue", "" ], [ "Chen", "Linghao", "" ], [ "Zeng", "Chong", "" ], [ "Su", "Hao", "" ] ]
We report Zero123++, an image-conditioned diffusion model for generating 3D-consistent multi-view images from a single input view. To take full advantage of pretrained 2D generative priors, we develop various conditioning and training schemes to minimize the effort of finetuning from off-the-shelf image diffusion models such as Stable Diffusion. Zero123++ excels in producing high-quality, consistent multi-view images from a single image, overcoming common issues like texture degradation and geometric misalignment. Furthermore, we showcase the feasibility of training a ControlNet on Zero123++ for enhanced control over the generation process. The code is available at https://github.com/SUDO-AI-3D/zero123plus.
2112.02198
Georg Maringer
Georg Maringer and Marvin Xhemrishi and Sven Puchinger and Kathrin Garb and Hedongliang Liu and Thomas Jerkovits and Ludwig K\"urzinger and Matthias Hiller and Antonia Wachter-Zeh
Analysis of Communication Channels Related to Physical Unclonable Functions
null
null
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryptographic algorithms rely on the secrecy of their corresponding keys. On embedded systems with standard CMOS chips, where secure permanent memory such as flash is not available as a key storage, the secret key can be derived from Physical Unclonable Functions (PUFs) that make use of minuscule manufacturing variations of, for instance, SRAM cells. Since PUFs are affected by environmental changes, the reliable reproduction of the PUF key requires error correction. For silicon PUFs with binary output, errors occur in the form of bitflips within the PUF's response. Modelling the channel as a Binary Symmetric Channel (BSC) with fixed crossover probability $p$ is only a first-order approximation of the real behavior of the PUF response. We propose a more realistic channel model, referred to as the Varying Binary Symmetric Channel (VBSC), which takes into account that the reliability of different PUF response bits may not be equal. We investigate its channel capacity for various scenarios which differ in the channel state information (CSI) present at encoder and decoder. We compare the capacity results for the VBSC for the different CSI cases with reference to the distribution of the bitflip probability according to a work by Maes et al.
[ { "created": "Sat, 4 Dec 2021 00:00:06 GMT", "version": "v1" } ]
2021-12-07
[ [ "Maringer", "Georg", "" ], [ "Xhemrishi", "Marvin", "" ], [ "Puchinger", "Sven", "" ], [ "Garb", "Kathrin", "" ], [ "Liu", "Hedongliang", "" ], [ "Jerkovits", "Thomas", "" ], [ "Kürzinger", "Ludwig", "" ], [ "Hiller", "Matthias", "" ], [ "Wachter-Zeh", "Antonia", "" ] ]
Cryptographic algorithms rely on the secrecy of their corresponding keys. On embedded systems with standard CMOS chips, where secure permanent memory such as flash is not available as a key storage, the secret key can be derived from Physical Unclonable Functions (PUFs) that make use of minuscule manufacturing variations of, for instance, SRAM cells. Since PUFs are affected by environmental changes, the reliable reproduction of the PUF key requires error correction. For silicon PUFs with binary output, errors occur in the form of bitflips within the PUF's response. Modelling the channel as a Binary Symmetric Channel (BSC) with fixed crossover probability $p$ is only a first-order approximation of the real behavior of the PUF response. We propose a more realistic channel model, referred to as the Varying Binary Symmetric Channel (VBSC), which takes into account that the reliability of different PUF response bits may not be equal. We investigate its channel capacity for various scenarios which differ in the channel state information (CSI) present at encoder and decoder. We compare the capacity results for the VBSC for the different CSI cases with reference to the distribution of the bitflip probability according to a work by Maes et al.
2103.08157
Kwanghee Choi
Kwanghee Choi, Minyoung Choe, Hyelee Lee
Pretraining Neural Architecture Search Controllers with Locality-based Self-Supervised Learning
Accepted to NeurIPS 2020 Workshop: Self-Supervised Learning - Theory and Practice
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural architecture search (NAS) has fostered various fields of machine learning. Despite its prominent contributions, many have criticized its intrinsic limitation of high computational cost. We aim to ameliorate this by proposing a pretraining scheme that can be generally applied to controller-based NAS. Our method, a locality-based self-supervised classification task, leverages the structural similarity of network architectures to obtain good architecture representations. We incorporate our method into neural architecture optimization (NAO) to analyze the pretrained embeddings and their effectiveness, and highlight that adding a metric learning loss brings a favorable impact on NAS. Our code is available at \url{https://github.com/Multi-Objective-NAS/self-supervised-nas}.
[ { "created": "Mon, 15 Mar 2021 06:30:36 GMT", "version": "v1" } ]
2021-03-16
[ [ "Choi", "Kwanghee", "" ], [ "Choe", "Minyoung", "" ], [ "Lee", "Hyelee", "" ] ]
Neural architecture search (NAS) has fostered various fields of machine learning. Despite its prominent contributions, many have criticized its intrinsic limitation of high computational cost. We aim to ameliorate this by proposing a pretraining scheme that can be generally applied to controller-based NAS. Our method, a locality-based self-supervised classification task, leverages the structural similarity of network architectures to obtain good architecture representations. We incorporate our method into neural architecture optimization (NAO) to analyze the pretrained embeddings and their effectiveness, and highlight that adding a metric learning loss brings a favorable impact on NAS. Our code is available at \url{https://github.com/Multi-Objective-NAS/self-supervised-nas}.
1907.03285
Konstantin Chukharev
Konstantin Chukharev and Daniil Chivilikhin
fbSAT: Automatic Inference of Minimal Finite-State Models of Function Blocks Using SAT Solver
21 pages (16 paper, 2 refs, 3 appendix); 9 figures; submitted without an appendix to TAP 2020. Keywords: SAT, Finite-state automata, LTL, Model checking, Counterexample-guided inductive synthesis, Function blocks, IEC 61499
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finite-state models are widely used in software engineering, especially in control systems development. Commonly, in control applications such models are developed manually; hence, keeping them up-to-date requires extra effort. To simplify the maintenance process, an automatic approach may be used, allowing models to be inferred from behavior examples and temporal properties. As an example of a specific control systems development application we focus on inferring finite-state models of function blocks (FBs) defined by the IEC 61499 international standard for distributed automation systems. In this paper we propose a method for FB model inference from behavior examples based on reduction to the Boolean satisfiability problem (SAT). Additionally, we take into account linear temporal properties using counterexample-guided synthesis. We also present the developed tool fbSAT which implements the proposed method, and evaluate it in two case studies: inference of a finite-state model of a Pick-and-Place manipulator, and reconstruction of randomly generated automata. In contrast to existing approaches, the suggested method is more efficient and produces finite-state models that are minimal both in the number of states and in the complexity of guard conditions.
[ { "created": "Sun, 7 Jul 2019 13:34:31 GMT", "version": "v1" }, { "created": "Fri, 25 Oct 2019 00:30:40 GMT", "version": "v2" }, { "created": "Tue, 4 Feb 2020 11:22:06 GMT", "version": "v3" } ]
2020-02-05
[ [ "Chukharev", "Konstantin", "" ], [ "Chivilikhin", "Daniil", "" ] ]
Finite-state models are widely used in software engineering, especially in control systems development. Commonly, in control applications such models are developed manually; hence, keeping them up-to-date requires extra effort. To simplify the maintenance process, an automatic approach may be used, allowing models to be inferred from behavior examples and temporal properties. As an example of a specific control systems development application we focus on inferring finite-state models of function blocks (FBs) defined by the IEC 61499 international standard for distributed automation systems. In this paper we propose a method for FB model inference from behavior examples based on reduction to the Boolean satisfiability problem (SAT). Additionally, we take into account linear temporal properties using counterexample-guided synthesis. We also present the developed tool fbSAT which implements the proposed method, and evaluate it in two case studies: inference of a finite-state model of a Pick-and-Place manipulator, and reconstruction of randomly generated automata. In contrast to existing approaches, the suggested method is more efficient and produces finite-state models that are minimal both in the number of states and in the complexity of guard conditions.
1304.0872
David Doty
David Doty
Timing in chemical reaction networks
null
null
null
null
cs.CC cs.DC cs.DS q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising programming language for the design of artificial molecular control circuitry. Due to a formal equivalence between CRNs and a model of distributed computing known as population protocols, results transfer readily between the two models. We show that if a CRN respects finite density (at most O(n) additional molecules can be produced from n initial molecules), then starting from any dense initial configuration (all molecular species initially present have initial count Omega(n), where n is the initial molecular count and volume), every producible species is produced in constant time with high probability. This implies that no CRN obeying the stated constraints can function as a timer, able to produce a molecule, but doing so only after a time that is an unbounded function of the input size. This has consequences regarding an open question of Angluin, Aspnes, and Eisenstat concerning the ability of population protocols to perform fast, reliable leader election and to simulate arbitrary algorithms from a uniform initial state.
[ { "created": "Wed, 3 Apr 2013 08:50:30 GMT", "version": "v1" } ]
2013-04-17
[ [ "Doty", "David", "" ] ]
Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising programming language for the design of artificial molecular control circuitry. Due to a formal equivalence between CRNs and a model of distributed computing known as population protocols, results transfer readily between the two models. We show that if a CRN respects finite density (at most O(n) additional molecules can be produced from n initial molecules), then starting from any dense initial configuration (all molecular species initially present have initial count Omega(n), where n is the initial molecular count and volume), every producible species is produced in constant time with high probability. This implies that no CRN obeying the stated constraints can function as a timer, able to produce a molecule, but doing so only after a time that is an unbounded function of the input size. This has consequences regarding an open question of Angluin, Aspnes, and Eisenstat concerning the ability of population protocols to perform fast, reliable leader election and to simulate arbitrary algorithms from a uniform initial state.
1407.2774
Will Perkins
Vitaly Feldman, Will Perkins, Santosh Vempala
Subsampled Power Iteration: a Unified Algorithm for Block Models and Planted CSP's
null
null
null
null
cs.DS math.CO math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an algorithm for recovering planted solutions in two well-known models, the stochastic block model and planted constraint satisfaction problems, via a common generalization in terms of random bipartite graphs. Our algorithm matches up to a constant factor the best-known bounds for the number of edges (or constraints) needed for perfect recovery and its running time is linear in the number of edges used. The time complexity is significantly better than both spectral and SDP-based approaches. The main contribution of the algorithm is in the case of unequal sizes in the bipartition (corresponding to odd uniformity in the CSP). Here our algorithm succeeds at a significantly lower density than the spectral approaches, surpassing a barrier based on the spectral norm of a random matrix. Other significant features of the algorithm and analysis include: (i) the critical use of power iteration with subsampling, which might be of independent interest; its analysis requires keeping track of multiple norms of an evolving solution; (ii) it can be implemented statistically, i.e., with very limited access to the input distribution; (iii) the algorithm is extremely simple to implement and runs in linear time, and thus is practical even for very large instances.
[ { "created": "Thu, 10 Jul 2014 13:12:38 GMT", "version": "v1" }, { "created": "Wed, 5 Nov 2014 06:54:15 GMT", "version": "v2" }, { "created": "Tue, 28 Apr 2015 21:01:03 GMT", "version": "v3" } ]
2015-04-30
[ [ "Feldman", "Vitaly", "" ], [ "Perkins", "Will", "" ], [ "Vempala", "Santosh", "" ] ]
We present an algorithm for recovering planted solutions in two well-known models, the stochastic block model and planted constraint satisfaction problems, via a common generalization in terms of random bipartite graphs. Our algorithm matches up to a constant factor the best-known bounds for the number of edges (or constraints) needed for perfect recovery and its running time is linear in the number of edges used. The time complexity is significantly better than both spectral and SDP-based approaches. The main contribution of the algorithm is in the case of unequal sizes in the bipartition (corresponding to odd uniformity in the CSP). Here our algorithm succeeds at a significantly lower density than the spectral approaches, surpassing a barrier based on the spectral norm of a random matrix. Other significant features of the algorithm and analysis include: (i) the critical use of power iteration with subsampling, which might be of independent interest; its analysis requires keeping track of multiple norms of an evolving solution; (ii) it can be implemented statistically, i.e., with very limited access to the input distribution; (iii) the algorithm is extremely simple to implement and runs in linear time, and thus is practical even for very large instances.
1810.03393
Ye Zhu PhD
Ye Zhu, Kai Ming Ting, Yuan Jin, Maia Angelova
Hierarchical clustering that takes advantage of both density-peak and density-connectivity
null
Zhu, Y., Ting, K. M., Jin, Y., & Angelova, M. (2022). Hierarchical clustering that takes advantage of both density-peak and density-connectivity. Information Systems, 103, 101871
10.1016/j.is.2021.101871
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on density-based clustering, particularly the Density Peak (DP) algorithm and the density-connectivity-based DBSCAN, and proposes a new method which takes advantage of the individual strengths of these two methods to yield a density-based hierarchical clustering algorithm. Our investigation begins with formally defining the types of clusters DP and DBSCAN are designed to detect, and then identifies the kinds of distributions in which DP and DBSCAN individually fail to detect all clusters. These identified weaknesses inspire us to formally define a new kind of cluster and propose a new method, called DC-HDP, that overcomes these weaknesses to identify clusters with arbitrary shapes and varied densities. In addition, the new method produces a richer clustering result in terms of a hierarchy or dendrogram for a better understanding of cluster structures. Our empirical evaluation results show that DC-HDP produces the best clustering results on 14 datasets in comparison with 7 state-of-the-art clustering algorithms.
[ { "created": "Mon, 8 Oct 2018 12:12:42 GMT", "version": "v1" }, { "created": "Mon, 20 Sep 2021 04:08:41 GMT", "version": "v2" } ]
2024-01-30
[ [ "Zhu", "Ye", "" ], [ "Ting", "Kai Ming", "" ], [ "Jin", "Yuan", "" ], [ "Angelova", "Maia", "" ] ]
This paper focuses on density-based clustering, particularly the Density Peak (DP) algorithm and the density-connectivity-based DBSCAN, and proposes a new method which takes advantage of the individual strengths of these two methods to yield a density-based hierarchical clustering algorithm. Our investigation begins with formally defining the types of clusters DP and DBSCAN are designed to detect, and then identifies the kinds of distributions in which DP and DBSCAN individually fail to detect all clusters. These identified weaknesses inspire us to formally define a new kind of cluster and propose a new method, called DC-HDP, that overcomes these weaknesses to identify clusters with arbitrary shapes and varied densities. In addition, the new method produces a richer clustering result in terms of a hierarchy or dendrogram for a better understanding of cluster structures. Our empirical evaluation results show that DC-HDP produces the best clustering results on 14 datasets in comparison with 7 state-of-the-art clustering algorithms.
2306.04170
Zhibin Chen
Zhibin Chen, Yansong Feng, Dongyan Zhao
From the One, Judge of the Whole: Typed Entailment Graph Construction with Predicate Generation
9 pages, 3 figures, accepted to ACL 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entailment Graphs (EGs) have been constructed based on extracted corpora as a strong and explainable form to indicate context-independent entailment relations in natural languages. However, EGs built by previous methods often suffer from severe sparsity issues, due to the limited corpora available and the long-tail phenomenon of predicate distributions. In this paper, we propose a multi-stage method, Typed Predicate-Entailment Graph Generator (TP-EGG), to tackle this problem. Given several seed predicates, TP-EGG builds the graphs by generating new predicates and detecting entailment relations among them. The generative nature of TP-EGG helps us leverage the recent advances from large pretrained language models (PLMs), while avoiding the reliance on carefully prepared corpora. Experiments on benchmark datasets show that TP-EGG can generate high-quality and scale-controllable entailment graphs, achieving significant in-domain improvement over state-of-the-art EGs and boosting the performance of downstream inference tasks.
[ { "created": "Wed, 7 Jun 2023 05:46:19 GMT", "version": "v1" } ]
2023-06-08
[ [ "Chen", "Zhibin", "" ], [ "Feng", "Yansong", "" ], [ "Zhao", "Dongyan", "" ] ]
Entailment Graphs (EGs) have been constructed based on extracted corpora as a strong and explainable form to indicate context-independent entailment relations in natural languages. However, EGs built by previous methods often suffer from severe sparsity issues, due to the limited corpora available and the long-tail phenomenon of predicate distributions. In this paper, we propose a multi-stage method, Typed Predicate-Entailment Graph Generator (TP-EGG), to tackle this problem. Given several seed predicates, TP-EGG builds the graphs by generating new predicates and detecting entailment relations among them. The generative nature of TP-EGG helps us leverage the recent advances from large pretrained language models (PLMs), while avoiding the reliance on carefully prepared corpora. Experiments on benchmark datasets show that TP-EGG can generate high-quality and scale-controllable entailment graphs, achieving significant in-domain improvement over state-of-the-art EGs and boosting the performance of downstream inference tasks.
1501.03610
Tom Z. J. Fu
Tom Z. J. Fu, Jianbing Ding, Richard T. B. Ma, Marianne Winslett, Yin Yang, Zhenjie Zhang
DRS: Dynamic Resource Scheduling for Real-Time Analytics over Fast Streams
This is the our latest version with certain modification
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a data stream management system (DSMS), users register continuous queries and receive result updates as data arrive and expire. We focus on applications with real-time constraints, in which the user must receive each result update within a given period after the update occurs. To handle fast data, the DSMS is commonly placed on top of a cloud infrastructure. Because stream properties such as arrival rates can fluctuate unpredictably, cloud resources must be dynamically provisioned and scheduled accordingly to ensure real-time response. It is essential, for existing systems and future developments alike, to be able to schedule resources dynamically according to the current workload, in order to avoid wasting resources or failing to deliver correct results on time. Motivated by this, we propose DRS, a novel dynamic resource scheduler for cloud-based DSMSs. DRS overcomes three fundamental challenges: (a) how to model the relationship between the provisioned resources and query response time; (b) where to best place resources; and (c) how to measure system load with minimal overhead. In particular, DRS includes an accurate performance model based on the theory of \emph{Jackson open queueing networks} and is capable of handling \emph{arbitrary} operator topologies, possibly with loops, splits and joins. Extensive experiments with real data confirm that DRS achieves real-time response with close to optimal resource consumption.
[ { "created": "Thu, 15 Jan 2015 09:37:32 GMT", "version": "v1" }, { "created": "Sat, 31 Jan 2015 07:59:22 GMT", "version": "v2" }, { "created": "Thu, 23 Apr 2015 09:48:22 GMT", "version": "v3" } ]
2015-04-24
[ [ "Fu", "Tom Z. J.", "" ], [ "Ding", "Jianbing", "" ], [ "Ma", "Richard T. B.", "" ], [ "Winslett", "Marianne", "" ], [ "Yang", "Yin", "" ], [ "Zhang", "Zhenjie", "" ] ]
In a data stream management system (DSMS), users register continuous queries and receive result updates as data arrive and expire. We focus on applications with real-time constraints, in which the user must receive each result update within a given period after the update occurs. To handle fast data, the DSMS is commonly placed on top of a cloud infrastructure. Because stream properties such as arrival rates can fluctuate unpredictably, cloud resources must be dynamically provisioned and scheduled accordingly to ensure real-time response. It is essential, for existing systems and future developments alike, to be able to schedule resources dynamically according to the current workload, in order to avoid wasting resources or failing to deliver correct results on time. Motivated by this, we propose DRS, a novel dynamic resource scheduler for cloud-based DSMSs. DRS overcomes three fundamental challenges: (a) how to model the relationship between the provisioned resources and query response time; (b) where to best place resources; and (c) how to measure system load with minimal overhead. In particular, DRS includes an accurate performance model based on the theory of \emph{Jackson open queueing networks} and is capable of handling \emph{arbitrary} operator topologies, possibly with loops, splits and joins. Extensive experiments with real data confirm that DRS achieves real-time response with close to optimal resource consumption.
2405.17120
Bogdan Chornomaz
Zachary Chase and Bogdan Chornomaz and Steve Hanneke and Shay Moran and Amir Yehudayoff
Dual VC Dimension Obstructs Sample Compression by Embeddings
null
null
null
null
cs.DM cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
This work studies embedding of arbitrary VC classes in well-behaved VC classes, focusing particularly on extremal classes. Our main result expresses an impossibility: such embeddings necessarily require a significant increase in dimension. In particular, we prove that for every $d$ there is a class with VC dimension $d$ that cannot be embedded in any extremal class of VC dimension smaller than exponential in $d$. In addition to its independent interest, this result has an important implication in learning theory, as it reveals a fundamental limitation of one of the most extensively studied approaches to tackling the long-standing sample compression conjecture. Concretely, the approach proposed by Floyd and Warmuth entails embedding any given VC class into an extremal class of a comparable dimension, and then applying an optimal sample compression scheme for extremal classes. However, our results imply that this strategy would in some cases result in a sample compression scheme at least exponentially larger than what is predicted by the sample compression conjecture. The above implications follow from a general result we prove: any extremal class with VC dimension $d$ has dual VC dimension at most $2d+1$. This bound is exponentially smaller than the classical bound $2^{d+1}-1$ of Assouad, which applies to general concept classes (and is known to be unimprovable for some classes). We in fact prove a stronger result, establishing that $2d+1$ upper bounds the dual Radon number of extremal classes. This theorem represents an abstraction of the classical Radon theorem for convex sets, extending its applicability to a wider combinatorial framework, without relying on the specifics of Euclidean convexity. The proof utilizes the topological method and is primarily based on variants of the Topological Radon Theorem.
[ { "created": "Mon, 27 May 2024 12:38:25 GMT", "version": "v1" } ]
2024-05-28
[ [ "Chase", "Zachary", "" ], [ "Chornomaz", "Bogdan", "" ], [ "Hanneke", "Steve", "" ], [ "Moran", "Shay", "" ], [ "Yehudayoff", "Amir", "" ] ]
This work studies embedding of arbitrary VC classes in well-behaved VC classes, focusing particularly on extremal classes. Our main result expresses an impossibility: such embeddings necessarily require a significant increase in dimension. In particular, we prove that for every $d$ there is a class with VC dimension $d$ that cannot be embedded in any extremal class of VC dimension smaller than exponential in $d$. In addition to its independent interest, this result has an important implication in learning theory, as it reveals a fundamental limitation of one of the most extensively studied approaches to tackling the long-standing sample compression conjecture. Concretely, the approach proposed by Floyd and Warmuth entails embedding any given VC class into an extremal class of a comparable dimension, and then applying an optimal sample compression scheme for extremal classes. However, our results imply that this strategy would in some cases result in a sample compression scheme at least exponentially larger than what is predicted by the sample compression conjecture. The above implications follow from a general result we prove: any extremal class with VC dimension $d$ has dual VC dimension at most $2d+1$. This bound is exponentially smaller than the classical bound $2^{d+1}-1$ of Assouad, which applies to general concept classes (and is known to be unimprovable for some classes). We in fact prove a stronger result, establishing that $2d+1$ upper bounds the dual Radon number of extremal classes. This theorem represents an abstraction of the classical Radon theorem for convex sets, extending its applicability to a wider combinatorial framework, without relying on the specifics of Euclidean convexity. The proof utilizes the topological method and is primarily based on variants of the Topological Radon Theorem.
2205.15731
Udo Schlegel
Udo Schlegel, Samuel Schiegg, Daniel A. Keim
ViNNPruner: Visual Interactive Pruning for Deep Learning
MLVis Short Paper; 4 pages, 1 page references, 3 figures
null
null
null
cs.LG cs.HC
http://creativecommons.org/licenses/by/4.0/
Neural networks grow vastly in size to tackle more sophisticated tasks. In many cases, such large networks are not deployable on particular hardware and need to be reduced in size. Pruning techniques help to shrink deep neural networks to smaller sizes while decreasing their performance as little as possible. However, such pruning algorithms are often hard to understand when applying them and do not incorporate domain knowledge, which can potentially be bad for user goals. We propose ViNNPruner, a visual interactive pruning application that implements state-of-the-art pruning algorithms and offers users the option to do manual pruning based on their knowledge. We show how the application facilitates gaining insights into automatic pruning algorithms and semi-automatically pruning oversized networks to make them more efficient using interactive visualizations.
[ { "created": "Tue, 31 May 2022 12:21:38 GMT", "version": "v1" } ]
2022-06-01
[ [ "Schlegel", "Udo", "" ], [ "Schiegg", "Samuel", "" ], [ "Keim", "Daniel A.", "" ] ]
Neural networks grow vastly in size to tackle more sophisticated tasks. In many cases, such large networks are not deployable on particular hardware and need to be reduced in size. Pruning techniques help to shrink deep neural networks to smaller sizes while decreasing their performance as little as possible. However, such pruning algorithms are often hard to understand in practice and do not incorporate domain knowledge, which can work against user goals. We propose ViNNPruner, a visual interactive pruning application that implements state-of-the-art pruning algorithms and the option for users to do manual pruning based on their knowledge. We show how the application facilitates gaining insights into automatic pruning algorithms and semi-automatically pruning oversized networks to make them more efficient using interactive visualizations.
1507.04405
Ryan McCune
Robert Ryan McCune, Tim Weninger, Gregory Madey
Thinking Like a Vertex: a Survey of Vertex-Centric Frameworks for Distributed Graph Processing
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The vertex-centric programming model is an established computational paradigm recently incorporated into distributed processing frameworks to address challenges in large-scale graph processing. Billion-node graphs that exceed the memory capacity of standard machines are not well-supported by popular Big Data tools like MapReduce, which are notoriously poor-performing for iterative graph algorithms such as PageRank. In response, a new type of framework challenges one to Think Like A Vertex (TLAV) and implements user-defined programs from the perspective of a vertex rather than a graph. Such an approach improves locality, demonstrates linear scalability, and provides a natural way to express and compute many iterative graph algorithms. These frameworks are simple to program and widely applicable, but, like an operating system, are composed of several intricate, interdependent components, of which a thorough understanding is necessary in order to elicit top performance at scale. To this end, the first comprehensive survey of TLAV frameworks is presented. In this survey, the vertex-centric approach to graph processing is overviewed, TLAV frameworks are deconstructed into four main components and respectively analyzed, and TLAV implementations are reviewed and categorized.
[ { "created": "Wed, 15 Jul 2015 22:14:23 GMT", "version": "v1" } ]
2015-07-17
[ [ "McCune", "Robert Ryan", "" ], [ "Weninger", "Tim", "" ], [ "Madey", "Gregory", "" ] ]
The vertex-centric programming model is an established computational paradigm recently incorporated into distributed processing frameworks to address challenges in large-scale graph processing. Billion-node graphs that exceed the memory capacity of standard machines are not well-supported by popular Big Data tools like MapReduce, which are notoriously poor-performing for iterative graph algorithms such as PageRank. In response, a new type of framework challenges one to Think Like A Vertex (TLAV) and implements user-defined programs from the perspective of a vertex rather than a graph. Such an approach improves locality, demonstrates linear scalability, and provides a natural way to express and compute many iterative graph algorithms. These frameworks are simple to program and widely applicable, but, like an operating system, are composed of several intricate, interdependent components, of which a thorough understanding is necessary in order to elicit top performance at scale. To this end, the first comprehensive survey of TLAV frameworks is presented. In this survey, the vertex-centric approach to graph processing is overviewed, TLAV frameworks are deconstructed into four main components and respectively analyzed, and TLAV implementations are reviewed and categorized.
2404.13504
Tao Feng
Tao Feng, Lizhen Qu, Zhuang Li, Haolan Zhan, Yuncheng Hua, Gholamreza Haffari
IMO: Greedy Layer-Wise Sparse Representation Learning for Out-of-Distribution Text Classification with Pre-trained Models
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning models have made incredible progress, but they still struggle when applied to examples from unseen domains. This study focuses on a specific problem of domain generalization, where a model is trained on one source domain and tested on multiple target domains that are unseen during training. We propose IMO: Invariant features Masks for Out-of-Distribution text classification, to achieve OOD generalization by learning invariant features. During training, IMO learns sparse mask layers to remove features that are irrelevant for prediction, while the remaining features remain invariant. Additionally, IMO has an attention module at the token level to focus on tokens that are useful for prediction. Our comprehensive experiments show that IMO substantially outperforms strong baselines across various evaluation metrics and settings.
[ { "created": "Sun, 21 Apr 2024 02:15:59 GMT", "version": "v1" } ]
2024-04-23
[ [ "Feng", "Tao", "" ], [ "Qu", "Lizhen", "" ], [ "Li", "Zhuang", "" ], [ "Zhan", "Haolan", "" ], [ "Hua", "Yuncheng", "" ], [ "Haffari", "Gholamreza", "" ] ]
Machine learning models have made incredible progress, but they still struggle when applied to examples from unseen domains. This study focuses on a specific problem of domain generalization, where a model is trained on one source domain and tested on multiple target domains that are unseen during training. We propose IMO: Invariant features Masks for Out-of-Distribution text classification, to achieve OOD generalization by learning invariant features. During training, IMO learns sparse mask layers to remove features that are irrelevant for prediction, while the remaining features remain invariant. Additionally, IMO has an attention module at the token level to focus on tokens that are useful for prediction. Our comprehensive experiments show that IMO substantially outperforms strong baselines across various evaluation metrics and settings.
2208.01148
Ben London
Ben London, Levi Lu, Ted Sandler, Thorsten Joachims
Boosted Off-Policy Learning
Final version as appeared in AISTATS 2023
null
null
null
cs.LG cs.IR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose the first boosting algorithm for off-policy learning from logged bandit feedback. Unlike existing boosting methods for supervised learning, our algorithm directly optimizes an estimate of the policy's expected reward. We analyze this algorithm and prove that the excess empirical risk decreases (possibly exponentially fast) with each round of boosting, provided a ''weak'' learning condition is satisfied by the base learner. We further show how to reduce the base learner to supervised learning, which opens up a broad range of readily available base learners with practical benefits, such as decision trees. Experiments indicate that our algorithm inherits many desirable properties of tree-based boosting algorithms (e.g., robustness to feature scaling and hyperparameter tuning), and that it can outperform off-policy learning with deep neural networks as well as methods that simply regress on the observed rewards.
[ { "created": "Mon, 1 Aug 2022 21:43:02 GMT", "version": "v1" }, { "created": "Tue, 2 May 2023 17:30:59 GMT", "version": "v2" } ]
2023-05-03
[ [ "London", "Ben", "" ], [ "Lu", "Levi", "" ], [ "Sandler", "Ted", "" ], [ "Joachims", "Thorsten", "" ] ]
We propose the first boosting algorithm for off-policy learning from logged bandit feedback. Unlike existing boosting methods for supervised learning, our algorithm directly optimizes an estimate of the policy's expected reward. We analyze this algorithm and prove that the excess empirical risk decreases (possibly exponentially fast) with each round of boosting, provided a ''weak'' learning condition is satisfied by the base learner. We further show how to reduce the base learner to supervised learning, which opens up a broad range of readily available base learners with practical benefits, such as decision trees. Experiments indicate that our algorithm inherits many desirable properties of tree-based boosting algorithms (e.g., robustness to feature scaling and hyperparameter tuning), and that it can outperform off-policy learning with deep neural networks as well as methods that simply regress on the observed rewards.
2310.01403
Size Wu
Size Wu and Wenwei Zhang and Lumin Xu and Sheng Jin and Xiangtai Li and Wentao Liu and Chen Change Loy
CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Open-vocabulary dense prediction tasks including object detection and image segmentation have been advanced by the success of Contrastive Language-Image Pre-training (CLIP). CLIP models, particularly those incorporating vision transformers (ViTs), have exhibited remarkable generalization ability in zero-shot image classification. However, when transferring the vision-language alignment of CLIP from global image representation to local region representation for the open-vocabulary dense prediction tasks, CLIP ViTs suffer from the domain shift from full images to local image regions. In this paper, we embark on an in-depth analysis of the region-language alignment in CLIP models, which is essential for downstream open-vocabulary dense prediction tasks. Subsequently, we propose an approach named CLIPSelf, which adapts the image-level recognition ability of the CLIP ViT to local image regions without needing any region-text pairs. CLIPSelf empowers a ViT to distill itself by aligning a region representation extracted from its dense feature map with the image-level representation of the corresponding image crop. With the enhanced CLIP ViTs, we achieve new state-of-the-art performance on open-vocabulary object detection, semantic segmentation, and panoptic segmentation across various benchmarks. Models and code are released at https://github.com/wusize/CLIPSelf.
[ { "created": "Mon, 2 Oct 2023 17:58:52 GMT", "version": "v1" }, { "created": "Wed, 24 Jan 2024 18:11:53 GMT", "version": "v2" } ]
2024-01-25
[ [ "Wu", "Size", "" ], [ "Zhang", "Wenwei", "" ], [ "Xu", "Lumin", "" ], [ "Jin", "Sheng", "" ], [ "Li", "Xiangtai", "" ], [ "Liu", "Wentao", "" ], [ "Loy", "Chen Change", "" ] ]
Open-vocabulary dense prediction tasks including object detection and image segmentation have been advanced by the success of Contrastive Language-Image Pre-training (CLIP). CLIP models, particularly those incorporating vision transformers (ViTs), have exhibited remarkable generalization ability in zero-shot image classification. However, when transferring the vision-language alignment of CLIP from global image representation to local region representation for the open-vocabulary dense prediction tasks, CLIP ViTs suffer from the domain shift from full images to local image regions. In this paper, we embark on an in-depth analysis of the region-language alignment in CLIP models, which is essential for downstream open-vocabulary dense prediction tasks. Subsequently, we propose an approach named CLIPSelf, which adapts the image-level recognition ability of the CLIP ViT to local image regions without needing any region-text pairs. CLIPSelf empowers a ViT to distill itself by aligning a region representation extracted from its dense feature map with the image-level representation of the corresponding image crop. With the enhanced CLIP ViTs, we achieve new state-of-the-art performance on open-vocabulary object detection, semantic segmentation, and panoptic segmentation across various benchmarks. Models and code are released at https://github.com/wusize/CLIPSelf.
1808.00714
Elena Kirshanova
Elena Kirshanova
Improved Quantum Information Set Decoding
This is a full and corrected version of the paper appeared in PQCrypto2018
null
10.1007/978-3-319-79063-3_24
null
cs.IT math.IT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present quantum information set decoding (ISD) algorithms for binary linear codes. First, we give an alternative view on the quantum walk based algorithms proposed by Kachigar and Tillich (PQCrypto'17). It is more general and allows us to consider any ISD algorithm that has certain properties. The algorithms of May-Meurer-Thomae and Becker-Joux-May-Meurer satisfy these properties. Second, we translate the May-Ozerov Near Neighbour technique (Eurocrypt'15) to an `update-and-query' language more suitable for the quantum walk framework. First, this re-interpretation makes it possible to analyse a broader class of algorithms and, second, it allows us to combine Near Neighbour search with the quantum walk framework and use both techniques to give a quantum version of Dumer's ISD with Near Neighbour.
[ { "created": "Thu, 2 Aug 2018 09:04:26 GMT", "version": "v1" } ]
2018-08-03
[ [ "Kirshanova", "Elena", "" ] ]
In this paper we present quantum information set decoding (ISD) algorithms for binary linear codes. First, we give an alternative view on the quantum walk based algorithms proposed by Kachigar and Tillich (PQCrypto'17). It is more general and allows us to consider any ISD algorithm that has certain properties. The algorithms of May-Meurer-Thomae and Becker-Joux-May-Meurer satisfy these properties. Second, we translate the May-Ozerov Near Neighbour technique (Eurocrypt'15) to an `update-and-query' language more suitable for the quantum walk framework. First, this re-interpretation makes it possible to analyse a broader class of algorithms and, second, it allows us to combine Near Neighbour search with the quantum walk framework and use both techniques to give a quantum version of Dumer's ISD with Near Neighbour.
2208.11099
Mirko Marras
Andrea Atzori, Gianni Fenu, Mirko Marras
Explaining Bias in Deep Face Recognition via Image Characteristics
Accepted as a full paper at IJCB 2022: 2022 International Joint Conference on Biometrics
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel explanatory framework aimed to provide a better understanding of how face recognition models perform as the underlying data characteristics (protected attributes: gender, ethnicity, age; non-protected attributes: facial hair, makeup, accessories, face orientation and occlusion, image distortion, emotions) on which they are tested change. With our framework, we evaluate ten state-of-the-art face recognition models, comparing their fairness in terms of security and usability on two data sets, involving six groups based on gender and ethnicity. We then analyze the impact of image characteristics on the models' performance. Our results show that trends appearing in a single-attribute analysis disappear or reverse when multi-attribute groups are considered, and that performance disparities are also related to non-protected attributes. Source code: https://cutt.ly/2XwRLiA.
[ { "created": "Tue, 23 Aug 2022 17:18:23 GMT", "version": "v1" } ]
2022-08-24
[ [ "Atzori", "Andrea", "" ], [ "Fenu", "Gianni", "" ], [ "Marras", "Mirko", "" ] ]
In this paper, we propose a novel explanatory framework aimed to provide a better understanding of how face recognition models perform as the underlying data characteristics (protected attributes: gender, ethnicity, age; non-protected attributes: facial hair, makeup, accessories, face orientation and occlusion, image distortion, emotions) on which they are tested change. With our framework, we evaluate ten state-of-the-art face recognition models, comparing their fairness in terms of security and usability on two data sets, involving six groups based on gender and ethnicity. We then analyze the impact of image characteristics on the models' performance. Our results show that trends appearing in a single-attribute analysis disappear or reverse when multi-attribute groups are considered, and that performance disparities are also related to non-protected attributes. Source code: https://cutt.ly/2XwRLiA.
0901.1906
Jan Cie\'sli\'nski L.
Jan L. Cieslinski, Boguslaw Ratkiewicz
How to improve the accuracy of the discrete gradient method in the one-dimensional case
7 pages plus 7 figures
null
10.1103/PhysRevE.81.016704
null
cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new numerical scheme for one-dimensional dynamical systems. This is a modification of the discrete gradient method that keeps its advantages, including stability and the conservation of the energy integral. However, its accuracy is higher by several orders of magnitude.
[ { "created": "Tue, 13 Jan 2009 23:16:12 GMT", "version": "v1" } ]
2015-05-13
[ [ "Cieslinski", "Jan L.", "" ], [ "Ratkiewicz", "Boguslaw", "" ] ]
We present a new numerical scheme for one-dimensional dynamical systems. This is a modification of the discrete gradient method that keeps its advantages, including stability and the conservation of the energy integral. However, its accuracy is higher by several orders of magnitude.
2108.11535
Matheus Pereira
Matheus Barros Pereira, Jefersson Alex dos Santos
ChessMix: Spatial Context Data Augmentation for Remote Sensing Semantic Segmentation
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Labeling semantic segmentation datasets is a costly and laborious process compared with tasks like image classification and object detection. This is especially true for remote sensing applications, which not only work with extremely high spatial resolution data but also commonly require the knowledge of experts in the area to perform the manual labeling. Data augmentation techniques help to improve deep learning models when labeled samples are few and imbalanced. In this work, we propose a novel data augmentation method focused on exploring the spatial context of remote sensing semantic segmentation. This method, ChessMix, creates new synthetic images from the existing training set by mixing transformed mini-patches across the dataset in a chessboard-like grid. ChessMix prioritizes patches with more examples of the rarest classes to alleviate the imbalance problems. The results on three diverse, well-known remote sensing datasets show that this is a promising approach that helps to improve the networks' performance, working especially well in datasets with few available data. The results also show that ChessMix is capable of improving the segmentation of objects with few labeled pixels when compared to the most widely used data augmentation methods.
[ { "created": "Thu, 26 Aug 2021 01:01:43 GMT", "version": "v1" } ]
2021-08-27
[ [ "Pereira", "Matheus Barros", "" ], [ "Santos", "Jefersson Alex dos", "" ] ]
Labeling semantic segmentation datasets is a costly and laborious process compared with tasks like image classification and object detection. This is especially true for remote sensing applications, which not only work with extremely high spatial resolution data but also commonly require the knowledge of experts in the area to perform the manual labeling. Data augmentation techniques help to improve deep learning models when labeled samples are few and imbalanced. In this work, we propose a novel data augmentation method focused on exploring the spatial context of remote sensing semantic segmentation. This method, ChessMix, creates new synthetic images from the existing training set by mixing transformed mini-patches across the dataset in a chessboard-like grid. ChessMix prioritizes patches with more examples of the rarest classes to alleviate the imbalance problems. The results on three diverse, well-known remote sensing datasets show that this is a promising approach that helps to improve the networks' performance, working especially well in datasets with few available data. The results also show that ChessMix is capable of improving the segmentation of objects with few labeled pixels when compared to the most widely used data augmentation methods.
1604.00942
Emanuel Laci\'c
Emanuel Lacic, Dominik Kowald, Elisabeth Lex
High Enough? Explaining and Predicting Traveler Satisfaction Using Airline Review
5 pages + references, 2 tables, 7 figures
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Air travel is one of the most frequently used means of transportation in our everyday life. Thus, it is not surprising that an increasing number of travelers share their experiences with airlines and airports in the form of online reviews on the Web. In this work, we strive to explain and uncover the features of airline reviews that contribute most to traveler satisfaction. To that end, we examine reviews crawled from the Skytrax air travel review portal. Skytrax provides four review categories to review airports, lounges, airlines and seats. Each review category consists of several five-star ratings as well as free-text review content. In this paper, we conduct a comprehensive feature study and find that not only does five-star rating information such as airport queuing time and lounge comfort correlate highly with traveler satisfaction, but so do textual features in the form of the inferred review text sentiment. Based on our findings, we created classifiers to predict traveler satisfaction using the best performing rating features. Our results reveal that, given our methodology, traveler satisfaction can be predicted with high accuracy. Additionally, we find that training a model on the sentiment of the review text provides a competitive alternative when no five-star rating information is available. We believe that our work is of interest for researchers in the area of modeling and predicting user satisfaction based on available review data on the Web.
[ { "created": "Mon, 4 Apr 2016 16:44:00 GMT", "version": "v1" } ]
2016-04-05
[ [ "Lacic", "Emanuel", "" ], [ "Kowald", "Dominik", "" ], [ "Lex", "Elisabeth", "" ] ]
Air travel is one of the most frequently used means of transportation in our everyday life. Thus, it is not surprising that an increasing number of travelers share their experiences with airlines and airports in the form of online reviews on the Web. In this work, we strive to explain and uncover the features of airline reviews that contribute most to traveler satisfaction. To that end, we examine reviews crawled from the Skytrax air travel review portal. Skytrax provides four review categories to review airports, lounges, airlines and seats. Each review category consists of several five-star ratings as well as free-text review content. In this paper, we conduct a comprehensive feature study and find that not only does five-star rating information such as airport queuing time and lounge comfort correlate highly with traveler satisfaction, but so do textual features in the form of the inferred review text sentiment. Based on our findings, we created classifiers to predict traveler satisfaction using the best performing rating features. Our results reveal that, given our methodology, traveler satisfaction can be predicted with high accuracy. Additionally, we find that training a model on the sentiment of the review text provides a competitive alternative when no five-star rating information is available. We believe that our work is of interest for researchers in the area of modeling and predicting user satisfaction based on available review data on the Web.
1612.05832
Leslie Ann Goldberg
Andreas Galanis, Leslie Ann Goldberg, Daniel Stefankovic
Implementations and the independent set polynomial below the Shearer threshold
To appear in TCS
null
null
null
cs.CC cs.DM
http://creativecommons.org/licenses/by/4.0/
The independent set polynomial is important in many areas. For every integer $\Delta\geq 2$, the Shearer threshold is the value $\lambda^*(\Delta)=(\Delta-1)^{\Delta-1}/\Delta^{\Delta}$. It is known that for $\lambda < - \lambda^*(\Delta)$, there are graphs~$G$ with maximum degree~$\Delta$ whose independent set polynomial, evaluated at~$\lambda$, is at most~$0$. Also, there are no such graphs for any $\lambda > -\lambda^*(\Delta)$. This paper is motivated by the computational problem of approximating the independent set polynomial when $\lambda < - \lambda^*(\Delta)$. The key issue in complexity bounds for this problem is "implementation". Informally, an implementation of a real number $\lambda'$ is a graph whose hard-core partition function, evaluated at~$\lambda$, simulates a vertex-weight of~$\lambda'$ in the sense that $\lambda'$ is the ratio between the contribution to the partition function from independent sets containing a certain vertex and the contribution from independent sets that do not contain that vertex. Implementations are the cornerstone of intractability results for the problem of approximately evaluating the independent set polynomial. Our main result is that, for any $\lambda < - \lambda^*(\Delta)$, it is possible to implement a set of values that is dense over the reals. The result is tight in the sense that it is not possible to implement a set of values that is dense over the reals for any $\lambda> -\lambda^*(\Delta)$. Our result has already been used in a paper with Bezakova (STOC 2018) to show that it is \#P-hard to approximate the evaluation of the independent set polynomial on graphs of degree at most~$\Delta$ at any value $\lambda<-\lambda^*(\Delta)$. In the appendix, we give an additional incomparable inapproximability result (strengthening the inapproximability bound to an exponential factor, but weakening the hardness to NP-hardness).
[ { "created": "Sat, 17 Dec 2016 22:33:15 GMT", "version": "v1" }, { "created": "Tue, 25 Apr 2017 17:36:52 GMT", "version": "v2" }, { "created": "Wed, 21 Jun 2017 16:22:35 GMT", "version": "v3" }, { "created": "Fri, 10 Jul 2020 08:47:57 GMT", "version": "v4" }, { "created": "Sat, 22 Oct 2022 17:01:38 GMT", "version": "v5" } ]
2022-10-25
[ [ "Galanis", "Andreas", "" ], [ "Goldberg", "Leslie Ann", "" ], [ "Stefankovic", "Daniel", "" ] ]
The independent set polynomial is important in many areas. For every integer $\Delta\geq 2$, the Shearer threshold is the value $\lambda^*(\Delta)=(\Delta-1)^{\Delta-1}/\Delta^{\Delta}$. It is known that for $\lambda < - \lambda^*(\Delta)$, there are graphs~$G$ with maximum degree~$\Delta$ whose independent set polynomial, evaluated at~$\lambda$, is at most~$0$. Also, there are no such graphs for any $\lambda > -\lambda^*(\Delta)$. This paper is motivated by the computational problem of approximating the independent set polynomial when $\lambda < - \lambda^*(\Delta)$. The key issue in complexity bounds for this problem is "implementation". Informally, an implementation of a real number $\lambda'$ is a graph whose hard-core partition function, evaluated at~$\lambda$, simulates a vertex-weight of~$\lambda'$ in the sense that $\lambda'$ is the ratio between the contribution to the partition function from independent sets containing a certain vertex and the contribution from independent sets that do not contain that vertex. Implementations are the cornerstone of intractability results for the problem of approximately evaluating the independent set polynomial. Our main result is that, for any $\lambda < - \lambda^*(\Delta)$, it is possible to implement a set of values that is dense over the reals. The result is tight in the sense that it is not possible to implement a set of values that is dense over the reals for any $\lambda> -\lambda^*(\Delta)$. Our result has already been used in a paper with Bezakova (STOC 2018) to show that it is \#P-hard to approximate the evaluation of the independent set polynomial on graphs of degree at most~$\Delta$ at any value $\lambda<-\lambda^*(\Delta)$. In the appendix, we give an additional incomparable inapproximability result (strengthening the inapproximability bound to an exponential factor, but weakening the hardness to NP-hardness).
2403.11150
Jing Zhang
Jing Zhang, Liang Zheng, Meng Wang, and Dan Guo
Training A Small Emotional Vision Language Model for Visual Art Comprehension
16 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper develops small vision language models to understand visual art: given an artwork, the model aims to identify its emotion category and explain this prediction with natural language. While small models are computationally efficient, their capacity is much more limited than that of large models. To break this trade-off, this paper builds a small emotional vision language model (SEVLM) via emotion modeling and input-output feature alignment. On the one hand, based on valence-arousal-dominance (VAD) knowledge annotated by psychology experts, we introduce and fuse emotional features derived from a VAD dictionary, and use a VAD head to align the VAD vectors of the predicted emotion explanation and the ground truth. This allows the vision language model to better understand and generate emotional texts, compared with using traditional text embeddings alone. On the other hand, we design a contrastive head to pull close the embeddings of the image, its emotion class, and explanation, which aligns model outputs and inputs. On two public affective explanation datasets, we show that the proposed techniques consistently improve the visual art understanding performance of baseline SEVLMs. Importantly, the proposed model can be trained and evaluated on a single RTX 2080 Ti while exhibiting very strong performance: it not only outperforms the state-of-the-art small models but is also competitive with fine-tuned LLaVA 7B and with GPT4(V). The code is available at https://github.com/BetterZH/SEVLM-code.
[ { "created": "Sun, 17 Mar 2024 09:01:02 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2024 13:26:48 GMT", "version": "v2" } ]
2024-07-11
[ [ "Zhang", "Jing", "" ], [ "Zheng", "Liang", "" ], [ "Wang", "Meng", "" ], [ "Guo", "Dan", "" ] ]
This paper develops small vision language models to understand visual art: given an artwork, the model aims to identify its emotion category and explain this prediction with natural language. While small models are computationally efficient, their capacity is much more limited than that of large models. To break this trade-off, this paper builds a small emotional vision language model (SEVLM) via emotion modeling and input-output feature alignment. On the one hand, based on valence-arousal-dominance (VAD) knowledge annotated by psychology experts, we introduce and fuse emotional features derived from a VAD dictionary, and use a VAD head to align the VAD vectors of the predicted emotion explanation and the ground truth. This allows the vision language model to better understand and generate emotional texts, compared with using traditional text embeddings alone. On the other hand, we design a contrastive head to pull close the embeddings of the image, its emotion class, and explanation, which aligns model outputs and inputs. On two public affective explanation datasets, we show that the proposed techniques consistently improve the visual art understanding performance of baseline SEVLMs. Importantly, the proposed model can be trained and evaluated on a single RTX 2080 Ti while exhibiting very strong performance: it not only outperforms the state-of-the-art small models but is also competitive with fine-tuned LLaVA 7B and with GPT4(V). The code is available at https://github.com/BetterZH/SEVLM-code.
1608.03677
Paul Cuff
Paul Cuff and Lanqing Yu
Differential Privacy as a Mutual Information Constraint
ACM CCS 2016, 12 pages
null
10.1145/2976749.2978308
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential privacy is a precise mathematical constraint meant to ensure privacy of individual pieces of information in a database even while queries are being answered about the aggregate. Intuitively, one must come to terms with what differential privacy does and does not guarantee. For example, the definition prevents a strong adversary who knows all but one entry in the database from further inferring about the last one. This strong adversary assumption can be overlooked, resulting in misinterpretation of the privacy guarantee of differential privacy. Herein we give an equivalent definition of privacy using mutual information that makes plain some of the subtleties of differential privacy. The mutual-information differential privacy is in fact sandwiched between $\epsilon$-differential privacy and $(\epsilon,\delta)$-differential privacy in terms of its strength. In contrast to previous works using unconditional mutual information, differential privacy is fundamentally related to conditional mutual information, accompanied by a maximization over the database distribution. The conceptual advantage of using mutual information, aside from yielding a simpler and more intuitive definition of differential privacy, is that its properties are well understood. Several properties of differential privacy are easily verified for the mutual information alternative, such as composition theorems.
[ { "created": "Fri, 12 Aug 2016 05:03:14 GMT", "version": "v1" } ]
2016-08-15
[ [ "Cuff", "Paul", "" ], [ "Yu", "Lanqing", "" ] ]
Differential privacy is a precise mathematical constraint meant to ensure privacy of individual pieces of information in a database even while queries are being answered about the aggregate. Intuitively, one must come to terms with what differential privacy does and does not guarantee. For example, the definition prevents a strong adversary who knows all but one entry in the database from further inferring about the last one. This strong adversary assumption can be overlooked, resulting in misinterpretation of the privacy guarantee of differential privacy. Herein we give an equivalent definition of privacy using mutual information that makes plain some of the subtleties of differential privacy. The mutual-information differential privacy is in fact sandwiched between $\epsilon$-differential privacy and $(\epsilon,\delta)$-differential privacy in terms of its strength. In contrast to previous works using unconditional mutual information, differential privacy is fundamentally related to conditional mutual information, accompanied by a maximization over the database distribution. The conceptual advantage of using mutual information, aside from yielding a simpler and more intuitive definition of differential privacy, is that its properties are well understood. Several properties of differential privacy are easily verified for the mutual information alternative, such as composition theorems.
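The $\epsilon$-differential-privacy guarantee this abstract reformulates via mutual information is most commonly achieved with the standard Laplace mechanism. The sketch below is an assumption-level illustration of that standard construction, not of the paper's mutual-information definition: noise with scale sensitivity/epsilon is added to a query answer over a toy database.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    # Adding Laplace noise with scale sensitivity/epsilon to a numeric query
    # answer satisfies epsilon-differential privacy (the classic mechanism).
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Counting query over a toy database of 0/1 entries: changing one entry
# moves the count by at most 1, so the query's sensitivity is 1.
rng = np.random.default_rng(0)
database = [1, 0, 1, 1, 0]
noisy_count = laplace_mechanism(sum(database), sensitivity=1.0,
                                epsilon=0.5, rng=rng)
```

Smaller epsilon means larger noise scale, i.e. a stronger privacy guarantee at the cost of a less accurate answer.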
2302.02931
Amrith Setlur
Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine
Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts
null
ICLR 2023
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Training machine learning models robust to distribution shifts is critical for real-world applications. Some robust training algorithms (e.g., Group DRO) specialize to group shifts and require group information on all training points. Other methods (e.g., CVaR DRO) that do not need group annotations can be overly conservative, since they naively upweight high loss points which may form a contrived set that does not correspond to any meaningful group in the real world (e.g., when the high loss points are randomly mislabeled training points). In this work, we address limitations in prior approaches by assuming a more nuanced form of group shift: conditioned on the label, we assume that the true group function (indicator over group) is simple. For example, we may expect that group shifts occur along low bitrate features (e.g., image background, lighting). Thus, we aim to learn a model that maintains high accuracy on simple group functions realized by these low bitrate features, that need not spend valuable model capacity achieving high accuracy on contrived groups of examples. Based on this, we consider the two-player game formulation of DRO where the adversary's capacity is bitrate-constrained. Our resulting practical algorithm, Bitrate-Constrained DRO (BR-DRO), does not require group information on training samples yet matches the performance of Group DRO on datasets that have training group annotations and that of CVaR DRO on long-tailed distributions. Our theoretical analysis reveals that in some settings BR-DRO objective can provably yield statistically efficient and less conservative solutions than unconstrained CVaR DRO.
[ { "created": "Mon, 6 Feb 2023 17:07:16 GMT", "version": "v1" }, { "created": "Thu, 12 Oct 2023 03:47:00 GMT", "version": "v2" } ]
2023-10-13
[ [ "Setlur", "Amrith", "" ], [ "Dennis", "Don", "" ], [ "Eysenbach", "Benjamin", "" ], [ "Raghunathan", "Aditi", "" ], [ "Finn", "Chelsea", "" ], [ "Smith", "Virginia", "" ], [ "Levine", "Sergey", "" ] ]
Training machine learning models robust to distribution shifts is critical for real-world applications. Some robust training algorithms (e.g., Group DRO) specialize to group shifts and require group information on all training points. Other methods (e.g., CVaR DRO) that do not need group annotations can be overly conservative, since they naively upweight high loss points which may form a contrived set that does not correspond to any meaningful group in the real world (e.g., when the high loss points are randomly mislabeled training points). In this work, we address limitations in prior approaches by assuming a more nuanced form of group shift: conditioned on the label, we assume that the true group function (indicator over group) is simple. For example, we may expect that group shifts occur along low bitrate features (e.g., image background, lighting). Thus, we aim to learn a model that maintains high accuracy on simple group functions realized by these low bitrate features, that need not spend valuable model capacity achieving high accuracy on contrived groups of examples. Based on this, we consider the two-player game formulation of DRO where the adversary's capacity is bitrate-constrained. Our resulting practical algorithm, Bitrate-Constrained DRO (BR-DRO), does not require group information on training samples yet matches the performance of Group DRO on datasets that have training group annotations and that of CVaR DRO on long-tailed distributions. Our theoretical analysis reveals that in some settings BR-DRO objective can provably yield statistically efficient and less conservative solutions than unconstrained CVaR DRO.
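The "naive upweighting of high loss points" that the abstract attributes to CVaR DRO can be pictured with a minimal sketch (an illustration of the plain CVaR objective, not of the paper's bitrate-constrained BR-DRO algorithm): CVaR at level alpha averages only the worst alpha-fraction of per-example losses, so a single outlier can dominate the objective.

```python
import numpy as np

def cvar_loss(losses, alpha):
    # CVaR_alpha objective: average the worst alpha-fraction of the
    # per-example losses, i.e. upweight only the highest-loss points.
    sorted_desc = np.sort(np.asarray(losses, dtype=float))[::-1]
    k = max(1, int(np.ceil(alpha * len(sorted_desc))))
    return float(sorted_desc[:k].mean())

batch_losses = [0.1, 0.2, 0.3, 4.0]  # one mislabeled-looking outlier
robust_objective = cvar_loss(batch_losses, alpha=0.25)  # driven by the outlier
```

This is the conservatism the paper targets: the high-loss points selected this way may form a contrived "group" (e.g. random label noise) rather than a meaningful one.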
2103.04428
Christoph Capellaro
Christoph Capellaro
Design of Ciphers based on the Geometric Structure of the Laguerre and Minkowski Planes
20 pages, 5 figures. arXiv admin note: text overlap with arXiv:2102.10321
null
null
null
cs.CR math.CO
http://creativecommons.org/licenses/by/4.0/
Until now, geometric structures have not played a major role in cryptography. Gilbert, MacWilliams and Sloane introduced an authentication scheme in the projective plane and showed its perfectness in the sense of Shannon. In arXiv:2102.10321 we introduced an encryption scheme in the M\"obius plane and showed that it fulfills Shannon's requirement of perfectness in first approximation and also the requirement of completeness according to Kam and Davida. In this paper we apply a similar approach to define encryption schemes in the geometries of the Laguerre plane and the Minkowski plane. We show that the encryption scheme in the Laguerre geometry meets Shannon's requirement of perfectness exactly and that the encryption scheme in the Minkowski geometry meets this requirement in first approximation. The Laguerre cipher also fulfills the requirement of completeness according to Kam and Davida.
[ { "created": "Sun, 7 Mar 2021 18:59:30 GMT", "version": "v1" } ]
2021-03-09
[ [ "Capellaro", "Christoph", "" ] ]
Until now, geometric structures have not played a major role in cryptography. Gilbert, MacWilliams and Sloane introduced an authentication scheme in the projective plane and showed its perfectness in the sense of Shannon. In arXiv:2102.10321 we introduced an encryption scheme in the M\"obius plane and showed that it fulfills Shannon's requirement of perfectness in first approximation and also the requirement of completeness according to Kam and Davida. In this paper we apply a similar approach to define encryption schemes in the geometries of the Laguerre plane and the Minkowski plane. We show that the encryption scheme in the Laguerre geometry meets Shannon's requirement of perfectness exactly and that the encryption scheme in the Minkowski geometry meets this requirement in first approximation. The Laguerre cipher also fulfills the requirement of completeness according to Kam and Davida.
0708.1116
Florian Simatos
Florian Simatos
A variant of the Recoil Growth algorithm to generate multi-polymer systems
Title changed
null
null
null
cs.CE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Recoil Growth algorithm, proposed in 1999 by Consta et al., is one of the most efficient algorithms available in the literature to sample from a multi-polymer system. Such problems are closely related to the generation of self-avoiding paths. In this paper, we study a variant of the original Recoil Growth algorithm, where we constrain the generation of a new polymer to take place on a specific class of graphs. This makes it possible to strike a fine trade-off between computational cost and success rate. We moreover give a simple proof of a lower bound on the irreducibility of this new algorithm, which applies to the original algorithm as well.
[ { "created": "Wed, 8 Aug 2007 15:07:53 GMT", "version": "v1" }, { "created": "Fri, 12 Sep 2008 10:02:19 GMT", "version": "v2" }, { "created": "Thu, 2 Jul 2009 12:31:15 GMT", "version": "v3" } ]
2009-07-02
[ [ "Simatos", "Florian", "" ] ]
The Recoil Growth algorithm, proposed in 1999 by Consta et al., is one of the most efficient algorithms available in the literature to sample from a multi-polymer system. Such problems are closely related to the generation of self-avoiding paths. In this paper, we study a variant of the original Recoil Growth algorithm, where we constrain the generation of a new polymer to take place on a specific class of graphs. This makes it possible to strike a fine trade-off between computational cost and success rate. We moreover give a simple proof of a lower bound on the irreducibility of this new algorithm, which applies to the original algorithm as well.
2108.11014
Huiqun Wang
Huiqun Wang, Ruijie Yang, Di Huang and Yunhong Wang
iDARTS: Improving DARTS by Node Normalization and Decorrelation Discretization
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Differentiable ARchiTecture Search (DARTS) uses a continuous relaxation of the network representation and dramatically accelerates Neural Architecture Search (NAS) by thousands of times in terms of GPU-days. However, the search process of DARTS is unstable and suffers severe degradation when the number of training epochs becomes large, thus limiting its application. In this paper, we claim that this degradation issue is caused by the imbalanced norms between different nodes and the highly correlated outputs from various operations. We then propose an improved version of DARTS, namely iDARTS, to deal with the two problems. In the training phase, it introduces node normalization to maintain the norm balance. In the discretization phase, the continuous architecture is approximated based on the similarity between the outputs of the node and the decorrelated operations rather than the values of the architecture parameters. Extensive evaluation is conducted on CIFAR-10 and ImageNet, and error rates of 2.25\% and 24.7\% are reported within 0.2 and 1.9 GPU-days for architecture search respectively, which shows its effectiveness. Additional analysis also reveals that iDARTS has the advantage in robustness and generalization over other DARTS-based counterparts.
[ { "created": "Wed, 25 Aug 2021 02:23:30 GMT", "version": "v1" } ]
2021-08-26
[ [ "Wang", "Huiqun", "" ], [ "Yang", "Ruijie", "" ], [ "Huang", "Di", "" ], [ "Wang", "Yunhong", "" ] ]
Differentiable ARchiTecture Search (DARTS) uses a continuous relaxation of the network representation and dramatically accelerates Neural Architecture Search (NAS) by thousands of times in terms of GPU-days. However, the search process of DARTS is unstable and suffers severe degradation when the number of training epochs becomes large, thus limiting its application. In this paper, we claim that this degradation issue is caused by the imbalanced norms between different nodes and the highly correlated outputs from various operations. We then propose an improved version of DARTS, namely iDARTS, to deal with the two problems. In the training phase, it introduces node normalization to maintain the norm balance. In the discretization phase, the continuous architecture is approximated based on the similarity between the outputs of the node and the decorrelated operations rather than the values of the architecture parameters. Extensive evaluation is conducted on CIFAR-10 and ImageNet, and error rates of 2.25\% and 24.7\% are reported within 0.2 and 1.9 GPU-days for architecture search respectively, which shows its effectiveness. Additional analysis also reveals that iDARTS has the advantage in robustness and generalization over other DARTS-based counterparts.
1611.06694
Akshayvarun Subramanya
Suraj Srinivas, Akshayvarun Subramanya, R. Venkatesh Babu
Training Sparse Neural Networks
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks with lots of parameters are typically used for large-scale computer vision tasks such as image classification. This is a result of using dense matrix multiplications and convolutions. However, sparse computations are known to be much more efficient. In this work, we train and build neural networks which implicitly use sparse computations. We introduce additional gate variables to perform parameter selection and show that this is equivalent to using a spike-and-slab prior. We experimentally validate our method on both small and large networks and achieve state-of-the-art compression results for sparse neural network models.
[ { "created": "Mon, 21 Nov 2016 09:24:24 GMT", "version": "v1" } ]
2016-11-22
[ [ "Srinivas", "Suraj", "" ], [ "Subramanya", "Akshayvarun", "" ], [ "Babu", "R. Venkatesh", "" ] ]
Deep neural networks with lots of parameters are typically used for large-scale computer vision tasks such as image classification. This is a result of using dense matrix multiplications and convolutions. However, sparse computations are known to be much more efficient. In this work, we train and build neural networks which implicitly use sparse computations. We introduce additional gate variables to perform parameter selection and show that this is equivalent to using a spike-and-slab prior. We experimentally validate our method on both small and large networks and achieve state-of-the-art compression results for sparse neural network models.
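The gate-variable idea for parameter selection can be pictured with a minimal sketch (hypothetical names; the paper's actual gates are learned jointly with the network under a spike-and-slab prior, which is not reproduced here): one gate per parameter masks the weight matrix, producing sparse weights suitable for sparse computation.

```python
import numpy as np

def apply_gates(weights, gates, threshold=0.5):
    # Gate variables perform parameter selection: weights whose gate falls
    # below the threshold are zeroed out, leaving a sparse weight matrix
    # that can be used with efficient sparse computations.
    mask = (gates >= threshold).astype(weights.dtype)
    return weights * mask

rng = np.random.default_rng(1)
weights = rng.standard_normal((4, 4))
gates = rng.uniform(size=(4, 4))      # stand-in for learned gate variables
sparse_weights = apply_gates(weights, gates)
sparsity = 1.0 - float((sparse_weights != 0).mean())
```

In the compression setting the abstract describes, only the surviving (gated-on) weights need to be stored and multiplied.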
1209.2890
Giulio Manzonetto
Thomas Ehrhard (CNRS), Antonio Bucciarelli (Universite Paris Diderot), Alberto Carraro (Universita Ca' Foscari), Giulio Manzonetto (Universite Paris Nord)
Full Abstraction for the Resource Lambda Calculus with Tests, through Taylor Expansion
null
Logical Methods in Computer Science, Volume 8, Issue 4 (October 10, 2012) lmcs:1047
10.2168/LMCS-8(4:3)2012
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the semantics of a resource-sensitive extension of the lambda calculus in a canonical reflexive object of a category of sets and relations, a relational version of Scott's original model of the pure lambda calculus. This calculus is related to Boudol's resource calculus and is derived from Ehrhard and Regnier's differential extension of Linear Logic and of the lambda calculus. We extend it with new constructions, to be understood as implementing a very simple exception mechanism, and with a "must" parallel composition. These new operations allow to associate a context of this calculus with any point of the model and to prove full abstraction for the finite sub-calculus where ordinary lambda calculus application is not allowed. The result is then extended to the full calculus by means of a Taylor Expansion formula. As an intermediate result we prove that the exception mechanism is not essential in the finite sub-calculus.
[ { "created": "Thu, 13 Sep 2012 13:45:55 GMT", "version": "v1" }, { "created": "Mon, 8 Oct 2012 20:21:59 GMT", "version": "v2" } ]
2015-07-01
[ [ "Ehrhard", "Thomas", "", "CNRS" ], [ "Bucciarelli", "Antonio", "", "Universite Paris Diderot" ], [ "Carraro", "Alberto", "", "Universita Ca' Foscari" ], [ "Manzonetto", "Giulio", "", "Universite Paris\n Nord" ] ]
We study the semantics of a resource-sensitive extension of the lambda calculus in a canonical reflexive object of a category of sets and relations, a relational version of Scott's original model of the pure lambda calculus. This calculus is related to Boudol's resource calculus and is derived from Ehrhard and Regnier's differential extension of Linear Logic and of the lambda calculus. We extend it with new constructions, to be understood as implementing a very simple exception mechanism, and with a "must" parallel composition. These new operations allow to associate a context of this calculus with any point of the model and to prove full abstraction for the finite sub-calculus where ordinary lambda calculus application is not allowed. The result is then extended to the full calculus by means of a Taylor Expansion formula. As an intermediate result we prove that the exception mechanism is not essential in the finite sub-calculus.
1902.02256
Kaveh Fathian
Kaveh Fathian, Kasra Khosoussi, Yulun Tian, Parker Lusk, Jonathan P. How
CLEAR: A Consistent Lifting, Embedding, and Alignment Rectification Algorithm for Multi-View Data Association
null
null
null
null
cs.RO cs.CV cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many robotics applications require alignment and fusion of observations obtained at multiple views to form a global model of the environment. Multi-way data association methods provide a mechanism to improve alignment accuracy of pairwise associations and ensure their consistency. However, existing methods that solve this computationally challenging problem are often too slow for real-time applications. Furthermore, some of the existing techniques can violate the cycle consistency principle, thus drastically reducing the fusion accuracy. This work presents the CLEAR (Consistent Lifting, Embedding, and Alignment Rectification) algorithm to address these issues. By leveraging insights from the multi-way matching and spectral graph clustering literature, CLEAR provides cycle consistent and accurate solutions in a computationally efficient manner. Numerical experiments on both synthetic and real datasets are carried out to demonstrate the scalability and superior performance of our algorithm in real-world problems. This algorithmic framework can provide significant improvement in the accuracy and efficiency of existing discrete assignment problems, which traditionally use pairwise (but potentially inconsistent) correspondences. An implementation of CLEAR is made publicly available online.
[ { "created": "Wed, 6 Feb 2019 16:12:57 GMT", "version": "v1" }, { "created": "Wed, 31 Jul 2019 21:39:56 GMT", "version": "v2" }, { "created": "Wed, 4 Mar 2020 23:08:51 GMT", "version": "v3" } ]
2020-03-06
[ [ "Fathian", "Kaveh", "" ], [ "Khosoussi", "Kasra", "" ], [ "Tian", "Yulun", "" ], [ "Lusk", "Parker", "" ], [ "How", "Jonathan P.", "" ] ]
Many robotics applications require alignment and fusion of observations obtained at multiple views to form a global model of the environment. Multi-way data association methods provide a mechanism to improve alignment accuracy of pairwise associations and ensure their consistency. However, existing methods that solve this computationally challenging problem are often too slow for real-time applications. Furthermore, some of the existing techniques can violate the cycle consistency principle, thus drastically reducing the fusion accuracy. This work presents the CLEAR (Consistent Lifting, Embedding, and Alignment Rectification) algorithm to address these issues. By leveraging insights from the multi-way matching and spectral graph clustering literature, CLEAR provides cycle consistent and accurate solutions in a computationally efficient manner. Numerical experiments on both synthetic and real datasets are carried out to demonstrate the scalability and superior performance of our algorithm in real-world problems. This algorithmic framework can provide significant improvement in the accuracy and efficiency of existing discrete assignment problems, which traditionally use pairwise (but potentially inconsistent) correspondences. An implementation of CLEAR is made publicly available online.
2307.13510
Xi Li
Yiming Wu, Ruixiang Li, Zequn Qin, Xinhai Zhao, Xi Li
HeightFormer: Explicit Height Modeling without Extra Data for Camera-only 3D Object Detection in Bird's Eye View
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-based Bird's Eye View (BEV) representation is an emerging perception formulation for autonomous driving. The core challenge is to construct BEV space with multi-camera features, which is a one-to-many ill-posed problem. Diving into all previous BEV representation generation methods, we found that most of them fall into two types: modeling depths in image views or modeling heights in the BEV space, mostly in an implicit way. In this work, we propose to explicitly model heights in the BEV space, which needs no extra data like LiDAR and can fit arbitrary camera rigs and types compared to modeling depths. Theoretically, we give proof of the equivalence between height-based methods and depth-based methods. Considering the equivalence and some advantages of modeling heights, we propose HeightFormer, which models heights and uncertainties in a self-recursive way. Without any extra data, the proposed HeightFormer could estimate heights in BEV accurately. Benchmark results show that the performance of HeightFormer achieves SOTA compared with those camera-only methods.
[ { "created": "Tue, 25 Jul 2023 14:02:02 GMT", "version": "v1" }, { "created": "Wed, 13 Mar 2024 04:09:04 GMT", "version": "v2" }, { "created": "Tue, 16 Jul 2024 02:10:46 GMT", "version": "v3" } ]
2024-07-17
[ [ "Wu", "Yiming", "" ], [ "Li", "Ruixiang", "" ], [ "Qin", "Zequn", "" ], [ "Zhao", "Xinhai", "" ], [ "Li", "Xi", "" ] ]
Vision-based Bird's Eye View (BEV) representation is an emerging perception formulation for autonomous driving. The core challenge is to construct BEV space with multi-camera features, which is a one-to-many ill-posed problem. Diving into all previous BEV representation generation methods, we found that most of them fall into two types: modeling depths in image views or modeling heights in the BEV space, mostly in an implicit way. In this work, we propose to explicitly model heights in the BEV space, which needs no extra data like LiDAR and can fit arbitrary camera rigs and types compared to modeling depths. Theoretically, we give proof of the equivalence between height-based methods and depth-based methods. Considering the equivalence and some advantages of modeling heights, we propose HeightFormer, which models heights and uncertainties in a self-recursive way. Without any extra data, the proposed HeightFormer could estimate heights in BEV accurately. Benchmark results show that the performance of HeightFormer achieves SOTA compared with those camera-only methods.
2309.11276
Jun Zhang
Kuan Tian, Yonghang Guan, Jinxi Xiang, Jun Zhang, Xiao Han, Wei Yang
Towards Real-Time Neural Video Codec for Cross-Platform Application Using Calibration Information
14 pages
null
10.1145/3581783.3611955
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The state-of-the-art neural video codecs have outperformed the most sophisticated traditional codecs in terms of RD performance in certain cases. However, utilizing them for practical applications is still challenging for two major reasons. 1) Cross-platform computational errors resulting from floating point operations can lead to inaccurate decoding of the bitstream. 2) The high computational complexity of the encoding and decoding process poses a challenge in achieving real-time performance. In this paper, we propose a real-time cross-platform neural video codec, which is capable of efficiently decoding 720P video bitstreams from other encoding platforms on a consumer-grade GPU. First, to solve the problem of codec inconsistency caused by the uncertainty of floating point calculations across platforms, we design a calibration transmitting system to guarantee the consistent quantization of entropy parameters between the encoding and decoding stages. The parameters that may have transboundary quantization between encoding and decoding are identified in the encoding stage, and their coordinates are delivered by an auxiliary transmitted bitstream. By doing so, these inconsistent parameters can be processed properly in the decoding stage. Furthermore, to reduce the bitrate of the auxiliary bitstream, we rectify the distribution of entropy parameters using a piecewise Gaussian constraint. Second, to match the computational limitations on the decoding side for a real-time video codec, we design a lightweight model. A series of efficiency techniques enable our model to achieve 25 FPS decoding speed on an NVIDIA RTX 2080 GPU. Experimental results demonstrate that our model can achieve real-time decoding of 720P videos while encoding on another platform. Furthermore, the real-time model brings up to 24.2\% BD-rate improvement from the perspective of PSNR with the anchor H.265.
[ { "created": "Wed, 20 Sep 2023 13:01:15 GMT", "version": "v1" } ]
2023-09-21
[ [ "Tian", "Kuan", "" ], [ "Guan", "Yonghang", "" ], [ "Xiang", "Jinxi", "" ], [ "Zhang", "Jun", "" ], [ "Han", "Xiao", "" ], [ "Yang", "Wei", "" ] ]
The state-of-the-art neural video codecs have outperformed the most sophisticated traditional codecs in terms of RD performance in certain cases. However, utilizing them for practical applications is still challenging for two major reasons. 1) Cross-platform computational errors resulting from floating point operations can lead to inaccurate decoding of the bitstream. 2) The high computational complexity of the encoding and decoding process poses a challenge in achieving real-time performance. In this paper, we propose a real-time cross-platform neural video codec, which is capable of efficiently decoding 720P video bitstreams from other encoding platforms on a consumer-grade GPU. First, to solve the problem of codec inconsistency caused by the uncertainty of floating point calculations across platforms, we design a calibration transmitting system to guarantee the consistent quantization of entropy parameters between the encoding and decoding stages. The parameters that may have transboundary quantization between encoding and decoding are identified in the encoding stage, and their coordinates are delivered by an auxiliary transmitted bitstream. By doing so, these inconsistent parameters can be processed properly in the decoding stage. Furthermore, to reduce the bitrate of the auxiliary bitstream, we rectify the distribution of entropy parameters using a piecewise Gaussian constraint. Second, to match the computational limitations on the decoding side for a real-time video codec, we design a lightweight model. A series of efficiency techniques enable our model to achieve 25 FPS decoding speed on an NVIDIA RTX 2080 GPU. Experimental results demonstrate that our model can achieve real-time decoding of 720P videos while encoding on another platform. Furthermore, the real-time model brings up to 24.2\% BD-rate improvement from the perspective of PSNR with the anchor H.265.
1906.07029
Jan Quenzel
Radu Alexandru Rosu and Jan Quenzel and Sven Behnke
Semi-Supervised Semantic Mapping through Label Propagation with Semantic Texture Meshes
This is a pre-print of an article published in International Journal of Computer Vision (IJCV, 2019)
null
10.1007/s11263-019-01187-z
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scene understanding is an important capability for robots acting in unstructured environments. While most SLAM approaches provide a geometrical representation of the scene, a semantic map is necessary for more complex interactions with the surroundings. Current methods treat the semantic map as part of the geometry which limits scalability and accuracy. We propose to represent the semantic map as a geometrical mesh and a semantic texture coupled at independent resolution. The key idea is that in many environments the geometry can be greatly simplified without losing fidelity, while semantic information can be stored at a higher resolution, independent of the mesh. We construct a mesh from depth sensors to represent the scene geometry and fuse information into the semantic texture from segmentations of individual RGB views of the scene. Making the semantics persistent in a global mesh enables us to enforce temporal and spatial consistency of the individual view predictions. For this, we propose an efficient method of establishing consensus between individual segmentations by iteratively retraining semantic segmentation with the information stored within the map and using the retrained segmentation to re-fuse the semantics. We demonstrate the accuracy and scalability of our approach by reconstructing semantic maps of scenes from NYUv2 and a scene spanning large buildings.
[ { "created": "Mon, 17 Jun 2019 13:36:21 GMT", "version": "v1" } ]
2019-06-18
[ [ "Rosu", "Radu Alexandru", "" ], [ "Quenzel", "Jan", "" ], [ "Behnke", "Sven", "" ] ]
Scene understanding is an important capability for robots acting in unstructured environments. While most SLAM approaches provide a geometrical representation of the scene, a semantic map is necessary for more complex interactions with the surroundings. Current methods treat the semantic map as part of the geometry which limits scalability and accuracy. We propose to represent the semantic map as a geometrical mesh and a semantic texture coupled at independent resolution. The key idea is that in many environments the geometry can be greatly simplified without losing fidelity, while semantic information can be stored at a higher resolution, independent of the mesh. We construct a mesh from depth sensors to represent the scene geometry and fuse information into the semantic texture from segmentations of individual RGB views of the scene. Making the semantics persistent in a global mesh enables us to enforce temporal and spatial consistency of the individual view predictions. For this, we propose an efficient method of establishing consensus between individual segmentations by iteratively retraining semantic segmentation with the information stored within the map and using the retrained segmentation to re-fuse the semantics. We demonstrate the accuracy and scalability of our approach by reconstructing semantic maps of scenes from NYUv2 and a scene spanning large buildings.
2003.02287
Thodoris Lykouris
Thodoris Lykouris, Vahab Mirrokni, Renato Paes Leme
Bandits with adversarial scaling
Appeared in ICML 2020
null
null
null
cs.LG cs.GT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study "adversarial scaling", a multi-armed bandit model where rewards have a stochastic and an adversarial component. Our model captures display advertising where the "click-through-rate" can be decomposed to a (fixed across time) arm-quality component and a non-stochastic user-relevance component (fixed across arms). Despite the relative stochasticity of our model, we demonstrate two settings where most bandit algorithms suffer. On the positive side, we show that two algorithms, one from the action elimination and one from the mirror descent family are adaptive enough to be robust to adversarial scaling. Our results shed light on the robustness of adaptive parameter selection in stochastic bandits, which may be of independent interest.
[ { "created": "Wed, 4 Mar 2020 19:03:23 GMT", "version": "v1" }, { "created": "Sat, 29 Aug 2020 03:07:22 GMT", "version": "v2" } ]
2020-09-01
[ [ "Lykouris", "Thodoris", "" ], [ "Mirrokni", "Vahab", "" ], [ "Leme", "Renato Paes", "" ] ]
We study "adversarial scaling", a multi-armed bandit model where rewards have a stochastic and an adversarial component. Our model captures display advertising where the "click-through-rate" can be decomposed to a (fixed across time) arm-quality component and a non-stochastic user-relevance component (fixed across arms). Despite the relative stochasticity of our model, we demonstrate two settings where most bandit algorithms suffer. On the positive side, we show that two algorithms, one from the action elimination and one from the mirror descent family are adaptive enough to be robust to adversarial scaling. Our results shed light on the robustness of adaptive parameter selection in stochastic bandits, which may be of independent interest.
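The "action elimination family" mentioned in this abstract can be illustrated by a standard successive-elimination sketch on stochastic Bernoulli arms (a generic textbook routine, not the paper's adversarial-scaling-robust variant): each surviving arm is pulled once per round, and arms whose upper confidence bound falls below the best lower confidence bound are dropped.

```python
import numpy as np

def successive_elimination(means, rounds, rng):
    # Pull every surviving arm once per round; eliminate arms whose upper
    # confidence bound drops below the best lower confidence bound.
    k = len(means)
    active = list(range(k))
    counts = np.zeros(k)
    sums = np.zeros(k)
    for r in range(1, rounds + 1):
        for a in active:
            sums[a] += rng.binomial(1, means[a])
            counts[a] += 1
        mu = sums / np.maximum(counts, 1)
        radius = np.sqrt(np.log(4 * k * r * r) / (2 * np.maximum(counts, 1)))
        best_lcb = max(mu[a] - radius[a] for a in active)
        active = [a for a in active if mu[a] + radius[a] >= best_lcb]
        if len(active) == 1:
            break
    return active

rng = np.random.default_rng(0)
survivors = successive_elimination([0.9, 0.5, 0.4], rounds=2000, rng=rng)
```

Under adversarial scaling, the per-round reward magnitudes would additionally be multiplied by a non-stochastic user-relevance factor shared across arms; the paper's point is that adaptive confidence-based elimination of this kind remains robust to such scaling while many other bandit algorithms do not.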
1707.08752
EPTCS
Wesley H. Holliday (University of California, Berkeley), Thomas F. Icard III (Stanford University)
Indicative Conditionals and Dynamic Epistemic Logic
In Proceedings TARK 2017, arXiv:1707.08250
EPTCS 251, 2017, pp. 337-351
10.4204/EPTCS.251.24
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent ideas about epistemic modals and indicative conditionals in formal semantics have significant overlap with ideas in modal logic and dynamic epistemic logic. The purpose of this paper is to show how greater interaction between formal semantics and dynamic epistemic logic in this area can be of mutual benefit. In one direction, we show how concepts and tools from modal logic and dynamic epistemic logic can be used to give a simple, complete axiomatization of Yalcin's [16] semantic consequence relation for a language with epistemic modals and indicative conditionals. In the other direction, the formal semantics for indicative conditionals due to Kolodny and MacFarlane [9] gives rise to a new dynamic operator that is very natural from the point of view of dynamic epistemic logic, allowing succinct expression of dependence (as in dependence logic) or supervenience statements. We prove decidability for the logic with epistemic modals and Kolodny and MacFarlane's indicative conditional via a full and faithful computable translation from their logic to the modal logic K45.
[ { "created": "Thu, 27 Jul 2017 07:51:12 GMT", "version": "v1" } ]
2017-08-07
[ [ "Holliday", "Wesley H.", "", "University of California, Berkeley" ], [ "Icard", "Thomas F.", "III", "Stanford University" ] ]
Recent ideas about epistemic modals and indicative conditionals in formal semantics have significant overlap with ideas in modal logic and dynamic epistemic logic. The purpose of this paper is to show how greater interaction between formal semantics and dynamic epistemic logic in this area can be of mutual benefit. In one direction, we show how concepts and tools from modal logic and dynamic epistemic logic can be used to give a simple, complete axiomatization of Yalcin's [16] semantic consequence relation for a language with epistemic modals and indicative conditionals. In the other direction, the formal semantics for indicative conditionals due to Kolodny and MacFarlane [9] gives rise to a new dynamic operator that is very natural from the point of view of dynamic epistemic logic, allowing succinct expression of dependence (as in dependence logic) or supervenience statements. We prove decidability for the logic with epistemic modals and Kolodny and MacFarlane's indicative conditional via a full and faithful computable translation from their logic to the modal logic K45.
2307.09416
Lorenzo Baraldi
Federico Betti, Jacopo Staiano, Lorenzo Baraldi, Lorenzo Baraldi, Rita Cucchiara, Nicu Sebe
Let's ViCE! Mimicking Human Cognitive Behavior in Image Generation Evaluation
Accepted as oral at ACM MultiMedia 2023 (Brave New Ideas track)
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research in Image Generation has recently made significant progress, particularly boosted by the introduction of Vision-Language models which are able to produce high-quality visual content based on textual inputs. Despite ongoing advancements in terms of generation quality and realism, no methodical frameworks have been defined yet to quantitatively measure the quality of the generated content and the adherence to the prompted requests: so far, only human-based evaluations have been adopted for quality satisfaction and for comparing different generative methods. We introduce a novel automated method for Visual Concept Evaluation (ViCE), i.e. to assess consistency between a generated/edited image and the corresponding prompt/instructions, with a process inspired by the human cognitive behaviour. ViCE combines the strengths of Large Language Models (LLMs) and Visual Question Answering (VQA) into a unified pipeline, aiming to replicate the human cognitive process in quality assessment. This method outlines visual concepts, formulates image-specific verification questions, utilizes the Q&A system to investigate the image, and scores the combined outcome. Although this brave new hypothesis of mimicking humans in the image evaluation process is in its preliminary assessment stage, results are promising and open the door to a new form of automatic evaluation which could have significant impact as the image generation or the image target editing tasks become more and more sophisticated.
[ { "created": "Tue, 18 Jul 2023 16:33:30 GMT", "version": "v1" }, { "created": "Wed, 19 Jul 2023 08:27:50 GMT", "version": "v2" } ]
2023-07-20
[ [ "Betti", "Federico", "" ], [ "Staiano", "Jacopo", "" ], [ "Baraldi", "Lorenzo", "" ], [ "Baraldi", "Lorenzo", "" ], [ "Cucchiara", "Rita", "" ], [ "Sebe", "Nicu", "" ] ]
Research in Image Generation has recently made significant progress, particularly boosted by the introduction of Vision-Language models which are able to produce high-quality visual content based on textual inputs. Despite ongoing advancements in terms of generation quality and realism, no methodical frameworks have been defined yet to quantitatively measure the quality of the generated content and the adherence to the prompted requests: so far, only human-based evaluations have been adopted for quality satisfaction and for comparing different generative methods. We introduce a novel automated method for Visual Concept Evaluation (ViCE), i.e. to assess consistency between a generated/edited image and the corresponding prompt/instructions, with a process inspired by the human cognitive behaviour. ViCE combines the strengths of Large Language Models (LLMs) and Visual Question Answering (VQA) into a unified pipeline, aiming to replicate the human cognitive process in quality assessment. This method outlines visual concepts, formulates image-specific verification questions, utilizes the Q&A system to investigate the image, and scores the combined outcome. Although this brave new hypothesis of mimicking humans in the image evaluation process is in its preliminary assessment stage, results are promising and open the door to a new form of automatic evaluation which could have significant impact as the image generation or the image target editing tasks become more and more sophisticated.
1709.02699
Aditya Shukla
Aditya Shukla and Udayan Ganguly
An On-chip Trainable and Clock-less Spiking Neural Network with 1R Memristive Synapses
null
null
10.1109/TBCAS.2018.2831618
null
cs.NE cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking neural networks (SNNs) are being explored in an attempt to mimic the brain's capability to learn and recognize at low power. Crossbar architecture with highly scalable Resistive RAM or RRAM array serving as synaptic weights and neuronal drivers in the periphery is an attractive option for SNN. Recognition (akin to reading the synaptic weight) requires small amplitude bias applied across the RRAM to minimize conductance change. Learning (akin to writing or updating the synaptic weight) requires large amplitude bias pulses to produce a conductance change. The contradictory bias amplitude requirement to perform reading and writing simultaneously and asynchronously, akin to biology, is a major challenge. Solutions suggested in the literature rely on time-division-multiplexing of read and write operations based on clocks, or approximations ignoring the reading when coincidental with writing. In this work, we overcome this challenge and present a clock-less approach wherein reading and writing are performed in different frequency domains. This enables learning and recognition simultaneously on an SNN. We validate our scheme in the SPICE circuit simulator by translating a two-layered feed-forward Iris classifying SNN to demonstrate software-equivalent performance. The system performance is not adversely affected by a voltage dependence of conductance in realistic RRAMs, despite departing from linearity. Overall, our approach enables direct implementation of biological SNN algorithms in hardware.
[ { "created": "Fri, 8 Sep 2017 13:38:44 GMT", "version": "v1" }, { "created": "Fri, 3 Nov 2017 13:20:29 GMT", "version": "v2" } ]
2018-08-08
[ [ "Shukla", "Aditya", "" ], [ "Ganguly", "Udayan", "" ] ]
Spiking neural networks (SNNs) are being explored in an attempt to mimic the brain's capability to learn and recognize at low power. Crossbar architecture with highly scalable Resistive RAM or RRAM array serving as synaptic weights and neuronal drivers in the periphery is an attractive option for SNN. Recognition (akin to reading the synaptic weight) requires small amplitude bias applied across the RRAM to minimize conductance change. Learning (akin to writing or updating the synaptic weight) requires large amplitude bias pulses to produce a conductance change. The contradictory bias amplitude requirement to perform reading and writing simultaneously and asynchronously, akin to biology, is a major challenge. Solutions suggested in the literature rely on time-division-multiplexing of read and write operations based on clocks, or approximations ignoring the reading when coincidental with writing. In this work, we overcome this challenge and present a clock-less approach wherein reading and writing are performed in different frequency domains. This enables learning and recognition simultaneously on an SNN. We validate our scheme in the SPICE circuit simulator by translating a two-layered feed-forward Iris classifying SNN to demonstrate software-equivalent performance. The system performance is not adversely affected by a voltage dependence of conductance in realistic RRAMs, despite departing from linearity. Overall, our approach enables direct implementation of biological SNN algorithms in hardware.
1601.03461
Nasim Ferdosian
Nasim Ferdosian, Mohamed Othman, Borhanuddin Mohd Ali, Kweh Yeah Lun
Greedy-Knapsack Algorithm for Optimal Downlink Resource Allocation in LTE Networks
Wireless Networks, 2015
null
10.1007/s11276-015-1042-9
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Long Term Evolution (LTE), as a mobile broadband technology, supports a wide domain of communication services with different requirements. Therefore, scheduling of all flows from various applications in overload states, in which the requested amount of bandwidth exceeds the limited available spectrum resources, is a challenging issue. Accordingly, in this paper, a greedy algorithm is presented to evaluate user candidates which are waiting for scheduling and select an optimal set of the users to maximize system performance, without exceeding available bandwidth capacity. The greedy-knapsack algorithm is defined as an optimal solution to the resource allocation problem, formulated based on the fractional knapsack problem. A compromise between throughput and QoS provisioning is obtained by proposing a class-based ranking function, which is a combination of throughput and QoS related parameters defined for each application. The simulation results show that the proposed method provides high performance in terms of throughput, loss and delay for different classes of QoS over the existing ones, especially under overload traffic.
[ { "created": "Thu, 14 Jan 2016 01:33:21 GMT", "version": "v1" } ]
2016-01-15
[ [ "Ferdosian", "Nasim", "" ], [ "Othman", "Mohamed", "" ], [ "Ali", "Borhanuddin Mohd", "" ], [ "Lun", "Kweh Yeah", "" ] ]
The Long Term Evolution (LTE), as a mobile broadband technology, supports a wide domain of communication services with different requirements. Therefore, scheduling of all flows from various applications in overload states, in which the requested amount of bandwidth exceeds the limited available spectrum resources, is a challenging issue. Accordingly, in this paper, a greedy algorithm is presented to evaluate user candidates which are waiting for scheduling and select an optimal set of the users to maximize system performance, without exceeding available bandwidth capacity. The greedy-knapsack algorithm is defined as an optimal solution to the resource allocation problem, formulated based on the fractional knapsack problem. A compromise between throughput and QoS provisioning is obtained by proposing a class-based ranking function, which is a combination of throughput and QoS related parameters defined for each application. The simulation results show that the proposed method provides high performance in terms of throughput, loss and delay for different classes of QoS over the existing ones, especially under overload traffic.
2110.08378
Jun Luo
Jun Luo, Shandong Wu
FedSLD: Federated Learning with Shared Label Distribution for Medical Image Classification
10 pages
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning in medical research, by nature, needs careful attention to obeying the regulations of data privacy, making it difficult to train a machine learning model over gathered data from different medical centers. Failure to leverage data of the same kind may result in poor generalizability for the trained model. Federated learning (FL) enables collaboratively training a joint model while keeping the data decentralized for multiple medical centers. However, federated optimizations often suffer from the heterogeneity of the data distribution across medical centers. In this work, we propose Federated Learning with Shared Label Distribution (FedSLD) for classification tasks, a method that assumes knowledge of the label distributions for all the participating clients in the federation. FedSLD adjusts the contribution of each data sample to the local objective during optimization given knowledge of the distribution, mitigating the instability brought by data heterogeneity across all clients. We conduct extensive experiments on four publicly available image datasets with different types of non-IID data distributions. Our results show that FedSLD achieves better convergence performance than the compared leading FL optimization algorithms, increasing the test accuracy by up to 5.50 percentage points.
[ { "created": "Fri, 15 Oct 2021 21:38:25 GMT", "version": "v1" } ]
2021-10-19
[ [ "Luo", "Jun", "" ], [ "Wu", "Shandong", "" ] ]
Machine learning in medical research, by nature, needs careful attention to obeying the regulations of data privacy, making it difficult to train a machine learning model over gathered data from different medical centers. Failure to leverage data of the same kind may result in poor generalizability for the trained model. Federated learning (FL) enables collaboratively training a joint model while keeping the data decentralized for multiple medical centers. However, federated optimizations often suffer from the heterogeneity of the data distribution across medical centers. In this work, we propose Federated Learning with Shared Label Distribution (FedSLD) for classification tasks, a method that assumes knowledge of the label distributions for all the participating clients in the federation. FedSLD adjusts the contribution of each data sample to the local objective during optimization given knowledge of the distribution, mitigating the instability brought by data heterogeneity across all clients. We conduct extensive experiments on four publicly available image datasets with different types of non-IID data distributions. Our results show that FedSLD achieves better convergence performance than the compared leading FL optimization algorithms, increasing the test accuracy by up to 5.50 percentage points.
1503.07463
Alexander Barvinok
Alexander Barvinok
Computing the partition function of a polynomial on the Boolean cube
The final version of this paper is due to be published in the collection of papers "A Journey through Discrete Mathematics. A Tribute to Jiri Matousek" edited by Martin Loebl, Jaroslav Nesetril and Robin Thomas, to be published by Springer
null
null
null
cs.DS math.CO math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a polynomial f: {-1, 1}^n --> C, we define the partition function as the average of e^{lambda f(x)} over all points x in {-1, 1}^n, where lambda in C is a parameter. We present a quasi-polynomial algorithm, which, given such f, lambda, and epsilon > 0, approximates the partition function within a relative error of epsilon in N^{O(ln n - ln epsilon)} time provided |lambda| < 1/(2 L sqrt{deg f}), where L=L(f) is a parameter bounding the Lipschitz constant of f from above and N is the number of monomials in f. As a corollary, we obtain a quasi-polynomial algorithm, which, given such an f with coefficients +1 and -1 and such that every variable enters not more than 4 monomials, approximates the maximum of f on {-1, 1}^n within a factor of O(sqrt{deg f}/delta), provided the maximum is N delta for some 0 < delta < 1. If every variable enters not more than k monomials for some fixed k > 4, we are able to establish a similar result when delta > (k-1)/k.
[ { "created": "Wed, 25 Mar 2015 17:08:37 GMT", "version": "v1" }, { "created": "Tue, 19 May 2015 23:16:38 GMT", "version": "v2" }, { "created": "Wed, 11 May 2016 18:18:29 GMT", "version": "v3" }, { "created": "Tue, 29 Nov 2016 02:22:00 GMT", "version": "v4" } ]
2016-11-30
[ [ "Barvinok", "Alexander", "" ] ]
For a polynomial f: {-1, 1}^n --> C, we define the partition function as the average of e^{lambda f(x)} over all points x in {-1, 1}^n, where lambda in C is a parameter. We present a quasi-polynomial algorithm, which, given such f, lambda, and epsilon > 0, approximates the partition function within a relative error of epsilon in N^{O(ln n - ln epsilon)} time provided |lambda| < 1/(2 L sqrt{deg f}), where L=L(f) is a parameter bounding the Lipschitz constant of f from above and N is the number of monomials in f. As a corollary, we obtain a quasi-polynomial algorithm, which, given such an f with coefficients +1 and -1 and such that every variable enters not more than 4 monomials, approximates the maximum of f on {-1, 1}^n within a factor of O(sqrt{deg f}/delta), provided the maximum is N delta for some 0 < delta < 1. If every variable enters not more than k monomials for some fixed k > 4, we are able to establish a similar result when delta > (k-1)/k.
2310.15910
Qinan Yu
Qinan Yu, Jack Merullo, Ellie Pavlick
Characterizing Mechanisms for Factual Recall in Language Models
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Language Models (LMs) often must integrate facts they memorized in pretraining with new information that appears in a given context. These two sources can disagree, causing competition within the model, and it is unclear how an LM will resolve the conflict. On a dataset that queries for knowledge of world capitals, we investigate both distributional and mechanistic determinants of LM behavior in such situations. Specifically, we measure the proportion of the time an LM will use a counterfactual prefix (e.g., "The capital of Poland is London") to overwrite what it learned in pretraining ("Warsaw"). On Pythia and GPT2, the training frequency of both the query country ("Poland") and the in-context city ("London") highly affect the models' likelihood of using the counterfactual. We then use head attribution to identify individual attention heads that either promote the memorized answer or the in-context answer in the logits. By scaling up or down the value vector of these heads, we can control the likelihood of using the in-context answer on new data. This method can increase the rate of generating the in-context answer to 88\% of the time simply by scaling a single head at runtime. Our work contributes to a body of evidence showing that we can often localize model behaviors to specific components and provides a proof of concept for how future methods might control model behavior dynamically at runtime.
[ { "created": "Tue, 24 Oct 2023 15:15:18 GMT", "version": "v1" } ]
2023-10-25
[ [ "Yu", "Qinan", "" ], [ "Merullo", "Jack", "" ], [ "Pavlick", "Ellie", "" ] ]
Language Models (LMs) often must integrate facts they memorized in pretraining with new information that appears in a given context. These two sources can disagree, causing competition within the model, and it is unclear how an LM will resolve the conflict. On a dataset that queries for knowledge of world capitals, we investigate both distributional and mechanistic determinants of LM behavior in such situations. Specifically, we measure the proportion of the time an LM will use a counterfactual prefix (e.g., "The capital of Poland is London") to overwrite what it learned in pretraining ("Warsaw"). On Pythia and GPT2, the training frequency of both the query country ("Poland") and the in-context city ("London") highly affect the models' likelihood of using the counterfactual. We then use head attribution to identify individual attention heads that either promote the memorized answer or the in-context answer in the logits. By scaling up or down the value vector of these heads, we can control the likelihood of using the in-context answer on new data. This method can increase the rate of generating the in-context answer to 88\% of the time simply by scaling a single head at runtime. Our work contributes to a body of evidence showing that we can often localize model behaviors to specific components and provides a proof of concept for how future methods might control model behavior dynamically at runtime.
2305.04926
Amy Lin
Amy Lin, Jason Y. Zhang, Deva Ramanan, Shubham Tulsiani
RelPose++: Recovering 6D Poses from Sparse-view Observations
Project webpage: https://amyxlase.github.io/relpose-plus-plus (Accepted to 3DV 2024)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the task of estimating 6D camera poses from sparse-view image sets (2-8 images). This task is a vital pre-processing stage for nearly all contemporary (neural) reconstruction algorithms but remains challenging given sparse views, especially for objects with visual symmetries and texture-less surfaces. We build on the recent RelPose framework which learns a network that infers distributions over relative rotations over image pairs. We extend this approach in two key ways: first, we use attentional transformer layers to process multiple images jointly, since additional views of an object may resolve ambiguous symmetries in any given image pair (such as the handle of a mug that becomes visible in a third view). Second, we augment this network to also report camera translations by defining an appropriate coordinate system that decouples the ambiguity in rotation estimation from translation prediction. Our final system results in large improvements in 6D pose prediction over prior art on both seen and unseen object categories and also enables pose estimation and 3D reconstruction for in-the-wild objects.
[ { "created": "Mon, 8 May 2023 17:59:58 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2023 15:49:22 GMT", "version": "v2" } ]
2023-12-19
[ [ "Lin", "Amy", "" ], [ "Zhang", "Jason Y.", "" ], [ "Ramanan", "Deva", "" ], [ "Tulsiani", "Shubham", "" ] ]
We address the task of estimating 6D camera poses from sparse-view image sets (2-8 images). This task is a vital pre-processing stage for nearly all contemporary (neural) reconstruction algorithms but remains challenging given sparse views, especially for objects with visual symmetries and texture-less surfaces. We build on the recent RelPose framework which learns a network that infers distributions over relative rotations over image pairs. We extend this approach in two key ways: first, we use attentional transformer layers to process multiple images jointly, since additional views of an object may resolve ambiguous symmetries in any given image pair (such as the handle of a mug that becomes visible in a third view). Second, we augment this network to also report camera translations by defining an appropriate coordinate system that decouples the ambiguity in rotation estimation from translation prediction. Our final system results in large improvements in 6D pose prediction over prior art on both seen and unseen object categories and also enables pose estimation and 3D reconstruction for in-the-wild objects.
2107.09443
Christopher Rackauckas
Kirill Zubov, Zoe McCarthy, Yingbo Ma, Francesco Calisto, Valerio Pagliarino, Simone Azeglio, Luca Bottero, Emmanuel Luj\'an, Valentin Sulzer, Ashutosh Bharambe, Nand Vinchhi, Kaushik Balakrishnan, Devesh Upadhyay, Chris Rackauckas
NeuralPDE: Automating Physics-Informed Neural Networks (PINNs) with Error Approximations
74 pages, 20+ figures, 20+ tables
null
null
null
cs.MS cs.SC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Physics-informed neural networks (PINNs) are an increasingly powerful way to solve partial differential equations, generate digital twins, and create neural surrogates of physical models. In this manuscript we detail the inner workings of NeuralPDE.jl and show how a formulation structured around numerical quadrature gives rise to new loss functions which allow for adaptivity towards bounded error tolerances. We describe the various ways one can use the tool, detailing mathematical techniques like using extended loss functions for parameter estimation and operator discovery, to help potential users adopt these PINN-based techniques into their workflow. We showcase how NeuralPDE uses a purely symbolic formulation so that all of the underlying training code is generated from an abstract formulation, and show how to make use of GPUs and solve systems of PDEs. Afterwards we give a detailed performance analysis which showcases the trade-off between training techniques on a large set of PDEs. We end by focusing on a complex multiphysics example, the Doyle-Fuller-Newman (DFN) Model, and showcase how this PDE can be formulated and solved with NeuralPDE. Together this manuscript is meant to be a detailed and approachable technical report to help potential users of the technique quickly get a sense of the real-world performance trade-offs and use cases of the PINN techniques.
[ { "created": "Mon, 19 Jul 2021 12:38:31 GMT", "version": "v1" } ]
2021-07-21
[ [ "Zubov", "Kirill", "" ], [ "McCarthy", "Zoe", "" ], [ "Ma", "Yingbo", "" ], [ "Calisto", "Francesco", "" ], [ "Pagliarino", "Valerio", "" ], [ "Azeglio", "Simone", "" ], [ "Bottero", "Luca", "" ], [ "Luján", "Emmanuel", "" ], [ "Sulzer", "Valentin", "" ], [ "Bharambe", "Ashutosh", "" ], [ "Vinchhi", "Nand", "" ], [ "Balakrishnan", "Kaushik", "" ], [ "Upadhyay", "Devesh", "" ], [ "Rackauckas", "Chris", "" ] ]
Physics-informed neural networks (PINNs) are an increasingly powerful way to solve partial differential equations, generate digital twins, and create neural surrogates of physical models. In this manuscript we detail the inner workings of NeuralPDE.jl and show how a formulation structured around numerical quadrature gives rise to new loss functions which allow for adaptivity towards bounded error tolerances. We describe the various ways one can use the tool, detailing mathematical techniques like using extended loss functions for parameter estimation and operator discovery, to help potential users adopt these PINN-based techniques into their workflow. We showcase how NeuralPDE uses a purely symbolic formulation so that all of the underlying training code is generated from an abstract formulation, and show how to make use of GPUs and solve systems of PDEs. Afterwards we give a detailed performance analysis which showcases the trade-off between training techniques on a large set of PDEs. We end by focusing on a complex multiphysics example, the Doyle-Fuller-Newman (DFN) Model, and showcase how this PDE can be formulated and solved with NeuralPDE. Together this manuscript is meant to be a detailed and approachable technical report to help potential users of the technique quickly get a sense of the real-world performance trade-offs and use cases of the PINN techniques.
1905.08041
Rui Portocarrero Sarmento MSc
Rui Portocarrero Sarmento
Inventory Management - A Case Study with NetLogo
null
null
null
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-Agent Systems (MAS) have been applied to several areas or tasks ranging from energy networks controlling to robot soccer teams. MAS are the ideal solution when they provide decision support in situations where human decisions and actions are not feasible to operate the system under control in real time. Thus, we present a case study that is related to dynamic simulation of an automatic inventory management system. We provide two types of agents, the clients, and the seller agents. Through a system of communication, the agents exchange messages to fulfill their inventory needs. The client agents trade products in quantities according to their needs and rely on seller agents if other clients in the retailer chain cannot provide the needed items. Additionally, it is expected that the trading between a client and the sellers is done through a reverse auction. This case study MAS uses BDI and FIPA-ACL in its implementation resulting in a clear simulation of the system. We expect to provide a comparison between two distinct situations. One with only external transactions with providers, and a situation where both internal and external transactions are allowed.
[ { "created": "Wed, 10 Apr 2019 09:33:03 GMT", "version": "v1" } ]
2019-05-21
[ [ "Sarmento", "Rui Portocarrero", "" ] ]
Multi-Agent Systems (MAS) have been applied to several areas or tasks ranging from energy networks controlling to robot soccer teams. MAS are the ideal solution when they provide decision support in situations where human decisions and actions are not feasible to operate the system under control in real time. Thus, we present a case study that is related to dynamic simulation of an automatic inventory management system. We provide two types of agents, the clients, and the seller agents. Through a system of communication, the agents exchange messages to fulfill their inventory needs. The client agents trade products in quantities according to their needs and rely on seller agents if other clients in the retailer chain cannot provide the needed items. Additionally, it is expected that the trading between a client and the sellers is done through a reverse auction. This case study MAS uses BDI and FIPA-ACL in its implementation resulting in a clear simulation of the system. We expect to provide a comparison between two distinct situations. One with only external transactions with providers, and a situation where both internal and external transactions are allowed.
2304.12521
Keunwoo Choi Mr
Keunwoo Choi, Jaekwon Im, Laurie Heller, Brian McFee, Keisuke Imoto, Yuki Okamoto, Mathieu Lagrange, Shinosuke Takamichi
Foley Sound Synthesis at the DCASE 2023 Challenge
DCASE 2023 Challenge - Task 7 - Technical Report (Submitted to DCASE 2023 Workshop)
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
The addition of Foley sound effects during post-production is a common technique used to enhance the perceived acoustic properties of multimedia content. Traditionally, Foley sound has been produced by human Foley artists, which involves manual recording and mixing of sound. However, recent advances in sound synthesis and generative models have generated interest in machine-assisted or automatic Foley synthesis techniques. To promote further research in this area, we have organized a challenge in DCASE 2023: Task 7 - Foley Sound Synthesis. Our challenge aims to provide a standardized evaluation framework that is both rigorous and efficient, allowing for the evaluation of different Foley synthesis systems. We received 17 submissions, and performed both objective and subjective evaluation to rank them according to three criteria: audio quality, fit-to-category, and diversity. Through this challenge, we hope to encourage active participation from the research community and advance the state-of-the-art in automatic Foley synthesis. In this technical report, we provide a detailed overview of the Foley sound synthesis challenge, including task definition, dataset, baseline, evaluation scheme and criteria, challenge result, and discussion.
[ { "created": "Tue, 25 Apr 2023 02:28:32 GMT", "version": "v1" }, { "created": "Wed, 26 Apr 2023 03:25:11 GMT", "version": "v2" }, { "created": "Thu, 15 Jun 2023 04:35:03 GMT", "version": "v3" }, { "created": "Thu, 28 Sep 2023 18:38:21 GMT", "version": "v4" } ]
2023-10-02
[ [ "Choi", "Keunwoo", "" ], [ "Im", "Jaekwon", "" ], [ "Heller", "Laurie", "" ], [ "McFee", "Brian", "" ], [ "Imoto", "Keisuke", "" ], [ "Okamoto", "Yuki", "" ], [ "Lagrange", "Mathieu", "" ], [ "Takamichi", "Shinosuke", "" ] ]
The addition of Foley sound effects during post-production is a common technique used to enhance the perceived acoustic properties of multimedia content. Traditionally, Foley sound has been produced by human Foley artists, which involves manual recording and mixing of sound. However, recent advances in sound synthesis and generative models have generated interest in machine-assisted or automatic Foley synthesis techniques. To promote further research in this area, we have organized a challenge in DCASE 2023: Task 7 - Foley Sound Synthesis. Our challenge aims to provide a standardized evaluation framework that is both rigorous and efficient, allowing for the evaluation of different Foley synthesis systems. We received 17 submissions, and performed both objective and subjective evaluation to rank them according to three criteria: audio quality, fit-to-category, and diversity. Through this challenge, we hope to encourage active participation from the research community and advance the state-of-the-art in automatic Foley synthesis. In this technical report, we provide a detailed overview of the Foley sound synthesis challenge, including task definition, dataset, baseline, evaluation scheme and criteria, challenge result, and discussion.
2305.19497
Keisuke Shirai
Keisuke Shirai, Hirotaka Kameko, Shinsuke Mori
Towards Flow Graph Prediction of Open-Domain Procedural Texts
RepL4NLP 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine comprehension of procedural texts is essential for reasoning about the steps and automating the procedures. However, this requires identifying entities within a text and resolving the relationships between the entities. Previous work focused on the cooking domain and proposed a framework to convert a recipe text into a flow graph (FG) representation. In this work, we propose a framework based on the recipe FG for flow graph prediction of open-domain procedural texts. To investigate flow graph prediction performance in non-cooking domains, we introduce the wikiHow-FG corpus from articles on wikiHow, a website of how-to instruction articles. In experiments, we consider using the existing recipe corpus and performing domain adaptation from the cooking to the target domain. Experimental results show that the domain adaptation models achieve higher performance than those trained only on the cooking or target domain data.
[ { "created": "Wed, 31 May 2023 02:15:15 GMT", "version": "v1" } ]
2023-06-01
[ [ "Shirai", "Keisuke", "" ], [ "Kameko", "Hirotaka", "" ], [ "Mori", "Shinsuke", "" ] ]
Machine comprehension of procedural texts is essential for reasoning about the steps and automating the procedures. However, this requires identifying entities within a text and resolving the relationships between the entities. Previous work focused on the cooking domain and proposed a framework to convert a recipe text into a flow graph (FG) representation. In this work, we propose a framework based on the recipe FG for flow graph prediction of open-domain procedural texts. To investigate flow graph prediction performance in non-cooking domains, we introduce the wikiHow-FG corpus from articles on wikiHow, a website of how-to instruction articles. In experiments, we consider using the existing recipe corpus and performing domain adaptation from the cooking to the target domain. Experimental results show that the domain adaptation models achieve higher performance than those trained only on the cooking or target domain data.
1204.2201
Chris Thachuk
Anne Condon, J\'an Ma\v{n}uch and Chris Thachuk
The complexity of string partitioning
14 pages main text + 13 pages appendix. Full version with proofs of an article appearing in the Proceedings of the 23rd Annual Symposium on Combinatorial Pattern Matching (CPM 2012), Helsinki, Finland, July 2012
null
null
null
cs.CC cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a string $w$ over a finite alphabet $\Sigma$ and an integer $K$, can $w$ be partitioned into strings of length at most $K$, such that there are no \emph{collisions}? We refer to this question as the \emph{string partition} problem and show it is \NP-complete for various definitions of collision and for a number of interesting restrictions including $|\Sigma|=2$. This establishes the hardness of an important problem in contemporary synthetic biology, namely, oligo design for gene synthesis.
[ { "created": "Tue, 10 Apr 2012 16:00:02 GMT", "version": "v1" } ]
2012-04-11
[ [ "Condon", "Anne", "" ], [ "Maňuch", "Ján", "" ], [ "Thachuk", "Chris", "" ] ]
Given a string $w$ over a finite alphabet $\Sigma$ and an integer $K$, can $w$ be partitioned into strings of length at most $K$, such that there are no \emph{collisions}? We refer to this question as the \emph{string partition} problem and show it is \NP-complete for various definitions of collision and for a number of interesting restrictions including $|\Sigma|=2$. This establishes the hardness of an important problem in contemporary synthetic biology, namely, oligo design for gene synthesis.
1812.00877
Glib Kechyn
Glib Kechyn
Automatic lesion boundary detection in dermoscopy
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This manuscript addresses the problem of automatic lesion boundary detection in dermoscopy using deep neural networks. The approach is based on adapting the U-net convolutional neural network with skip connections to the lesion boundary segmentation task. I hope this paper could serve, to some extent, as an experiment in using deep convolutional networks for biomedical segmentation tasks and as a guideline for the boundary detection benchmark, inspiring further attempts and research.
[ { "created": "Fri, 23 Nov 2018 22:36:36 GMT", "version": "v1" } ]
2018-12-04
[ [ "Kechyn", "Glib", "" ] ]
This manuscript addresses the problem of automatic lesion boundary detection in dermoscopy using deep neural networks. The approach is based on adapting the U-net convolutional neural network with skip connections to the lesion boundary segmentation task. I hope this paper could serve, to some extent, as an experiment in using deep convolutional networks for biomedical segmentation tasks and as a guideline for the boundary detection benchmark, inspiring further attempts and research.
2008.01323
Pengqian Yu
Xinhan Di, Pengqian Yu, Hong Zhu, Lei Cai, Qiuyan Sheng, Changyu Sun
Structural Plan of Indoor Scenes with Personalized Preferences
Accepted by the 8th International Workshop on Assistive Computer Vision and Robotics (ACVR) in Conjunction with ECCV 2020
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose an assistive model that supports professional interior designers in producing industrial interior decoration solutions and meeting the personalized preferences of property owners. The proposed model is able to automatically produce the layout of objects in a particular indoor scene according to property owners' preferences. In particular, the model consists of the extraction of an abstract graph, conditional graph generation, and conditional scene instantiation. We provide an interior layout dataset that contains 11,000 real-world designs from professional designers. Our numerical results on the dataset demonstrate the effectiveness of the proposed model compared with state-of-the-art methods.
[ { "created": "Tue, 4 Aug 2020 04:46:19 GMT", "version": "v1" }, { "created": "Wed, 5 Aug 2020 13:30:45 GMT", "version": "v2" } ]
2020-08-06
[ [ "Di", "Xinhan", "" ], [ "Yu", "Pengqian", "" ], [ "Zhu", "Hong", "" ], [ "Cai", "Lei", "" ], [ "Sheng", "Qiuyan", "" ], [ "Sun", "Changyu", "" ] ]
In this paper, we propose an assistive model that supports professional interior designers in producing industrial interior decoration solutions and meeting the personalized preferences of property owners. The proposed model is able to automatically produce the layout of objects in a particular indoor scene according to property owners' preferences. In particular, the model consists of the extraction of an abstract graph, conditional graph generation, and conditional scene instantiation. We provide an interior layout dataset that contains 11,000 real-world designs from professional designers. Our numerical results on the dataset demonstrate the effectiveness of the proposed model compared with state-of-the-art methods.
1304.0589
Meryem Kassou
Meryem Kassou and Laila Kjiri
A Goal Question Metric Approach for Evaluating Security in a Service Oriented Architecture Context
12 pages
IJCSI(International Journal of Computer Science Issues)Journal, Volume 9, Issue 4, No 1, July 2012
null
null
cs.SE cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For interactions to be possible within the Service Oriented Architecture (SOA) ecosystem, each actor must be sufficiently confident in the other actors to engage safely in the interactions. Therefore, establishing objective metrics tailored to the context of SOA that show the security of a system and lead to enhancements is very attractive. The purpose of our paper is to present a GQM (Goal Question Metric) approach based on standard security metrics and on SOA maturity that can support organizations in assessing SOA security and ensuring the safety of their SOA-based collaborations.
[ { "created": "Tue, 2 Apr 2013 11:22:22 GMT", "version": "v1" } ]
2013-04-03
[ [ "Kassou", "Meryem", "" ], [ "Kjiri", "Laila", "" ] ]
For interactions to be possible within the Service Oriented Architecture (SOA) ecosystem, each actor must be sufficiently confident in the other actors to engage safely in the interactions. Therefore, establishing objective metrics tailored to the context of SOA that show the security of a system and lead to enhancements is very attractive. The purpose of our paper is to present a GQM (Goal Question Metric) approach based on standard security metrics and on SOA maturity that can support organizations in assessing SOA security and ensuring the safety of their SOA-based collaborations.
1604.06976
Mahyuddin K. M. Nasution
Mahyuddin K. M. Nasution
Extracted Social Network Mining
6 pages. Proceeding of International Conference on Information Technology and Engineering Application (5-th ICIBA), 86-91, February 19-20, 2016
null
null
null
cs.SI cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
In this paper we study the relationships between the resources of social networks by exploring the Web as big data based on a simple search engine. We have used set theory, utilizing occurrence and co-occurrence to define the singleton and doubleton event spaces in a search engine model, and then provided them as representations of social actors and their relationships in clusters. Thus, the behaviors of social actors and their relations can be derived from the Web.
[ { "created": "Sun, 24 Apr 2016 02:39:16 GMT", "version": "v1" } ]
2016-04-26
[ [ "Nasution", "Mahyuddin K. M.", "" ] ]
In this paper we study the relationships between the resources of social networks by exploring the Web as big data based on a simple search engine. We have used set theory, utilizing occurrence and co-occurrence to define the singleton and doubleton event spaces in a search engine model, and then provided them as representations of social actors and their relationships in clusters. Thus, the behaviors of social actors and their relations can be derived from the Web.
2007.04842
Jim Mainprice
Jim Mainprice and Nathan Ratliff and Marc Toussaint and Stefan Schaal
An Interior Point Method Solving Motion Planning Problems with Narrow Passages
IEEE RO-MAN 2020, 6 pages
null
null
null
cs.RO cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithmic solutions for the motion planning problem have been investigated for five decades. Since the development of A* in 1969, many approaches have been investigated, traditionally classified as grid decomposition, potential fields, or sampling-based. In this work, we focus on using numerical optimization, which is understudied for solving motion planning problems. This lack of interest, in favor of sampling-based methods, is largely due to the non-convexity introduced by narrow passages. We address this shortcoming by grounding the solution in differential geometry. We demonstrate through a series of experiments on 3-DoF and 6-DoF narrow passage problems how modeling the underlying Riemannian manifold explicitly leads to an efficient interior-point non-linear programming solution.
[ { "created": "Thu, 9 Jul 2020 14:44:19 GMT", "version": "v1" }, { "created": "Fri, 24 Jul 2020 15:49:53 GMT", "version": "v2" } ]
2020-07-27
[ [ "Mainprice", "Jim", "" ], [ "Ratliff", "Nathan", "" ], [ "Toussaint", "Marc", "" ], [ "Schaal", "Stefan", "" ] ]
Algorithmic solutions for the motion planning problem have been investigated for five decades. Since the development of A* in 1969, many approaches have been investigated, traditionally classified as grid decomposition, potential fields, or sampling-based. In this work, we focus on using numerical optimization, which is understudied for solving motion planning problems. This lack of interest, in favor of sampling-based methods, is largely due to the non-convexity introduced by narrow passages. We address this shortcoming by grounding the solution in differential geometry. We demonstrate through a series of experiments on 3-DoF and 6-DoF narrow passage problems how modeling the underlying Riemannian manifold explicitly leads to an efficient interior-point non-linear programming solution.
2406.08160
Shoujie Li
Shoujie Li, Yan Huang, Changqing Guo, Tong Wu, Jiawei Zhang, Linrui Zhang, Wenbo Ding
Chemistry3D: Robotic Interaction Benchmark for Chemistry Experiments
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of simulation engines has revolutionized learning and operational efficiency for robots, offering cost-effective and swift pipelines. However, the lack of a universal simulation platform tailored for chemical scenarios impedes progress in robotic manipulation and visualization of reaction processes. Addressing this void, we present Chemistry3D, an innovative toolkit that integrates extensive chemical and robotic knowledge. Chemistry3D not only enables robots to perform chemical experiments but also provides real-time visualization of temperature, color, and pH changes during reactions. Built on the NVIDIA Omniverse platform, Chemistry3D offers interfaces for robot operation, visual inspection, and liquid flow control, facilitating the simulation of special objects such as liquids and transparent entities. Leveraging this toolkit, we have devised RL tasks, object detection, and robot operation scenarios. Additionally, to discern disparities between the rendering engine and the real world, we conducted transparent object detection experiments using Sim2Real, validating the toolkit's exceptional simulation performance. The source code is available at https://github.com/huangyan28/Chemistry3D, and a related tutorial can be found at https://www.omni-chemistry.com.
[ { "created": "Wed, 12 Jun 2024 12:51:20 GMT", "version": "v1" } ]
2024-06-13
[ [ "Li", "Shoujie", "" ], [ "Huang", "Yan", "" ], [ "Guo", "Changqing", "" ], [ "Wu", "Tong", "" ], [ "Zhang", "Jiawei", "" ], [ "Zhang", "Linrui", "" ], [ "Ding", "Wenbo", "" ] ]
The advent of simulation engines has revolutionized learning and operational efficiency for robots, offering cost-effective and swift pipelines. However, the lack of a universal simulation platform tailored for chemical scenarios impedes progress in robotic manipulation and visualization of reaction processes. Addressing this void, we present Chemistry3D, an innovative toolkit that integrates extensive chemical and robotic knowledge. Chemistry3D not only enables robots to perform chemical experiments but also provides real-time visualization of temperature, color, and pH changes during reactions. Built on the NVIDIA Omniverse platform, Chemistry3D offers interfaces for robot operation, visual inspection, and liquid flow control, facilitating the simulation of special objects such as liquids and transparent entities. Leveraging this toolkit, we have devised RL tasks, object detection, and robot operation scenarios. Additionally, to discern disparities between the rendering engine and the real world, we conducted transparent object detection experiments using Sim2Real, validating the toolkit's exceptional simulation performance. The source code is available at https://github.com/huangyan28/Chemistry3D, and a related tutorial can be found at https://www.omni-chemistry.com.
2303.03633
Kazuma Kobayashi
Kazuma Kobayashi, Lin Gu, Ryuichiro Hataya, Takaaki Mizuno, Mototaka Miyake, Hirokazu Watanabe, Masamichi Takahashi, Yasuyuki Takamizawa, Yukihiro Yoshida, Satoshi Nakamura, Nobuji Kouno, Amina Bolatkan, Yusuke Kurose, Tatsuya Harada, Ryuji Hamamoto
Sketch-based Medical Image Retrieval
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The amount of medical images stored in hospitals is increasing faster than ever; however, utilizing the accumulated medical images has been limited. This is because existing content-based medical image retrieval (CBMIR) systems usually require example images to construct query vectors; nevertheless, example images cannot always be prepared. Besides, there can be images with rare characteristics that make it difficult to find similar example images, which we call isolated samples. Here, we introduce a novel sketch-based medical image retrieval (SBMIR) system that enables users to find images of interest without example images. The key idea lies in feature decomposition of medical images, whereby the entire feature of a medical image can be decomposed into and reconstructed from normal and abnormal features. By extending this idea, our SBMIR system provides an easy-to-use two-step graphical user interface: users first select a template image to specify a normal feature and then draw a semantic sketch of the disease on the template image to represent an abnormal feature. Subsequently, it integrates the two kinds of input to construct a query vector and retrieves reference images with the closest reference vectors. Using two datasets, ten healthcare professionals with various clinical backgrounds participated in the user test for evaluation. As a result, our SBMIR system enabled users to overcome previous challenges, including image retrieval based on fine-grained image characteristics, image retrieval without example images, and image retrieval for isolated samples. Our SBMIR system achieves flexible medical image retrieval on demand, thereby expanding the utility of medical image databases.
[ { "created": "Tue, 7 Mar 2023 03:41:13 GMT", "version": "v1" } ]
2023-03-08
[ [ "Kobayashi", "Kazuma", "" ], [ "Gu", "Lin", "" ], [ "Hataya", "Ryuichiro", "" ], [ "Mizuno", "Takaaki", "" ], [ "Miyake", "Mototaka", "" ], [ "Watanabe", "Hirokazu", "" ], [ "Takahashi", "Masamichi", "" ], [ "Takamizawa", "Yasuyuki", "" ], [ "Yoshida", "Yukihiro", "" ], [ "Nakamura", "Satoshi", "" ], [ "Kouno", "Nobuji", "" ], [ "Bolatkan", "Amina", "" ], [ "Kurose", "Yusuke", "" ], [ "Harada", "Tatsuya", "" ], [ "Hamamoto", "Ryuji", "" ] ]
The amount of medical images stored in hospitals is increasing faster than ever; however, utilizing the accumulated medical images has been limited. This is because existing content-based medical image retrieval (CBMIR) systems usually require example images to construct query vectors; nevertheless, example images cannot always be prepared. Besides, there can be images with rare characteristics that make it difficult to find similar example images, which we call isolated samples. Here, we introduce a novel sketch-based medical image retrieval (SBMIR) system that enables users to find images of interest without example images. The key idea lies in feature decomposition of medical images, whereby the entire feature of a medical image can be decomposed into and reconstructed from normal and abnormal features. By extending this idea, our SBMIR system provides an easy-to-use two-step graphical user interface: users first select a template image to specify a normal feature and then draw a semantic sketch of the disease on the template image to represent an abnormal feature. Subsequently, it integrates the two kinds of input to construct a query vector and retrieves reference images with the closest reference vectors. Using two datasets, ten healthcare professionals with various clinical backgrounds participated in the user test for evaluation. As a result, our SBMIR system enabled users to overcome previous challenges, including image retrieval based on fine-grained image characteristics, image retrieval without example images, and image retrieval for isolated samples. Our SBMIR system achieves flexible medical image retrieval on demand, thereby expanding the utility of medical image databases.
2205.06002
Simon St{\aa}hlberg
Simon St{\aa}hlberg, Blai Bonet, Hector Geffner
Learning Generalized Policies Without Supervision Using GNNs
Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning (KR-22)
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We consider the problem of learning generalized policies for classical planning domains using graph neural networks from small instances represented in lifted STRIPS. The problem has been considered before but the proposed neural architectures are complex and the results are often mixed. In this work, we use a simple and general GNN architecture and aim at obtaining crisp experimental results and a deeper understanding: either the policy greedy in the learned value function achieves close to 100% generalization over instances larger than those used in training, or the failure must be understood, and possibly fixed, logically. For this, we exploit the relation established between the expressive power of GNNs and the $C_{2}$ fragment of first-order logic (namely, FOL with 2 variables and counting quantifiers). We find for example that domains with general policies that require more expressive features can be solved with GNNs once the states are extended with suitable "derived atoms" encoding role compositions and transitive closures that do not fit into $C_{2}$. The work follows the GNN approach for learning optimal general policies in a supervised fashion (Stahlberg, Bonet, Geffner, 2022); but the learned policies are no longer required to be optimal (which expands the scope, as many planning domains do not have general optimal policies) and are learned without supervision. Interestingly, value-based reinforcement learning methods that aim to produce optimal policies, do not always yield policies that generalize, as the goals of optimality and generality are in conflict in domains where optimal planning is NP-hard.
[ { "created": "Thu, 12 May 2022 10:28:46 GMT", "version": "v1" } ]
2022-05-13
[ [ "Ståhlberg", "Simon", "" ], [ "Bonet", "Blai", "" ], [ "Geffner", "Hector", "" ] ]
We consider the problem of learning generalized policies for classical planning domains using graph neural networks from small instances represented in lifted STRIPS. The problem has been considered before but the proposed neural architectures are complex and the results are often mixed. In this work, we use a simple and general GNN architecture and aim at obtaining crisp experimental results and a deeper understanding: either the policy greedy in the learned value function achieves close to 100% generalization over instances larger than those used in training, or the failure must be understood, and possibly fixed, logically. For this, we exploit the relation established between the expressive power of GNNs and the $C_{2}$ fragment of first-order logic (namely, FOL with 2 variables and counting quantifiers). We find for example that domains with general policies that require more expressive features can be solved with GNNs once the states are extended with suitable "derived atoms" encoding role compositions and transitive closures that do not fit into $C_{2}$. The work follows the GNN approach for learning optimal general policies in a supervised fashion (Stahlberg, Bonet, Geffner, 2022); but the learned policies are no longer required to be optimal (which expands the scope, as many planning domains do not have general optimal policies) and are learned without supervision. Interestingly, value-based reinforcement learning methods that aim to produce optimal policies, do not always yield policies that generalize, as the goals of optimality and generality are in conflict in domains where optimal planning is NP-hard.
2010.02686
Aina Gar\'i Soler
Aina Gar\'i Soler, Marianna Apidianaki
BERT Knows Punta Cana is not just beautiful, it's gorgeous: Ranking Scalar Adjectives with Contextualised Representations
EMNLP 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adjectives like pretty, beautiful and gorgeous describe positive properties of the nouns they modify but with different intensity. These differences are important for natural language understanding and reasoning. We propose a novel BERT-based approach to intensity detection for scalar adjectives. We model intensity by vectors directly derived from contextualised representations and show they can successfully rank scalar adjectives. We evaluate our models both intrinsically, on gold standard datasets, and on an Indirect Question Answering task. Our results demonstrate that BERT encodes rich knowledge about the semantics of scalar adjectives, and is able to provide better quality intensity rankings than static embeddings and previous models with access to dedicated resources.
[ { "created": "Tue, 6 Oct 2020 13:05:47 GMT", "version": "v1" } ]
2020-10-07
[ [ "Soler", "Aina Garí", "" ], [ "Apidianaki", "Marianna", "" ] ]
Adjectives like pretty, beautiful and gorgeous describe positive properties of the nouns they modify but with different intensity. These differences are important for natural language understanding and reasoning. We propose a novel BERT-based approach to intensity detection for scalar adjectives. We model intensity by vectors directly derived from contextualised representations and show they can successfully rank scalar adjectives. We evaluate our models both intrinsically, on gold standard datasets, and on an Indirect Question Answering task. Our results demonstrate that BERT encodes rich knowledge about the semantics of scalar adjectives, and is able to provide better quality intensity rankings than static embeddings and previous models with access to dedicated resources.
2005.02605
Hamed Nemati
Hamed Nemati
Secure System Virtualization: End-to-End Verification of Memory Isolation
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last years, security kernels have played a promising role in reshaping the landscape of platform security on today's ubiquitous embedded devices. Security kernels, such as separation kernels, enable constructing high-assurance mixed-criticality execution platforms. They reduce the software portion of the system's trusted computing base to a thin layer, which enforces isolation between low- and high-criticality components. The reduced trusted computing base minimizes the system attack surface and facilitates the use of formal methods to ensure functional correctness and security of the kernel. In this thesis, we explore various aspects of building a provably secure separation kernel using virtualization technology. In particular, we examine techniques related to the appropriate management of the memory subsystem. Once these techniques were implemented and functionally verified, they provide a reliable foundation for application scenarios that require strong guarantees of isolation and facilitate formal reasoning about the system's overall security.
[ { "created": "Wed, 6 May 2020 06:03:04 GMT", "version": "v1" } ]
2020-05-07
[ [ "Nemati", "Hamed", "" ] ]
Over the last years, security kernels have played a promising role in reshaping the landscape of platform security on today's ubiquitous embedded devices. Security kernels, such as separation kernels, enable constructing high-assurance mixed-criticality execution platforms. They reduce the software portion of the system's trusted computing base to a thin layer, which enforces isolation between low- and high-criticality components. The reduced trusted computing base minimizes the system attack surface and facilitates the use of formal methods to ensure functional correctness and security of the kernel. In this thesis, we explore various aspects of building a provably secure separation kernel using virtualization technology. In particular, we examine techniques related to the appropriate management of the memory subsystem. Once these techniques were implemented and functionally verified, they provide a reliable foundation for application scenarios that require strong guarantees of isolation and facilitate formal reasoning about the system's overall security.
2009.05257
Charlene Yang
Charlene Yang, Yunsong Wang, Steven Farrell, Thorsten Kurth, Samuel Williams
Hierarchical Roofline Performance Analysis for Deep Learning Applications
9 pages
null
null
null
cs.DC cs.LG cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a practical methodology for collecting the performance data necessary to conduct hierarchical Roofline analysis on NVIDIA GPUs. It discusses the extension of the Empirical Roofline Toolkit to support a broader range of data precisions and Tensor Cores, and introduces an Nsight Compute based method to accurately collect application performance information. This methodology allows for automated machine characterization and application characterization for Roofline analysis across the entire memory hierarchy on NVIDIA GPUs, and it is validated by a complex deep learning application used for climate image segmentation. We use two versions of the code, in TensorFlow and PyTorch respectively, to demonstrate the use and effectiveness of this methodology. We highlight how the application utilizes the compute and memory capabilities of the GPU and how the implementation and performance differ between the two deep learning frameworks.
[ { "created": "Fri, 11 Sep 2020 07:16:55 GMT", "version": "v1" }, { "created": "Wed, 16 Sep 2020 07:14:16 GMT", "version": "v2" }, { "created": "Tue, 22 Sep 2020 21:57:58 GMT", "version": "v3" }, { "created": "Wed, 25 Nov 2020 02:52:41 GMT", "version": "v4" } ]
2020-11-26
[ [ "Yang", "Charlene", "" ], [ "Wang", "Yunsong", "" ], [ "Farrell", "Steven", "" ], [ "Kurth", "Thorsten", "" ], [ "Williams", "Samuel", "" ] ]
This paper presents a practical methodology for collecting the performance data necessary to conduct hierarchical Roofline analysis on NVIDIA GPUs. It discusses the extension of the Empirical Roofline Toolkit to support a broader range of data precisions and Tensor Cores, and introduces an Nsight Compute based method to accurately collect application performance information. This methodology allows for automated machine characterization and application characterization for Roofline analysis across the entire memory hierarchy on NVIDIA GPUs, and it is validated by a complex deep learning application used for climate image segmentation. We use two versions of the code, in TensorFlow and PyTorch respectively, to demonstrate the use and effectiveness of this methodology. We highlight how the application utilizes the compute and memory capabilities of the GPU and how the implementation and performance differ between the two deep learning frameworks.
2205.10226
Oliver Eberle
Stephanie Brandl, Oliver Eberle, Jonas Pilot, Anders S{\o}gaard
Do Transformer Models Show Similar Attention Patterns to Task-Specific Human Gaze?
Accepted to ACL 2022
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learned self-attention functions in state-of-the-art NLP models often correlate with human attention. We investigate whether self-attention in large-scale pre-trained language models is as predictive of human eye fixation patterns during task-reading as classical cognitive models of human attention. We compare attention functions across two task-specific reading datasets for sentiment analysis and relation extraction. We find the predictiveness of large-scale pre-trained self-attention for human attention depends on `what is in the tail', e.g., the syntactic nature of rare contexts. Further, we observe that task-specific fine-tuning does not increase the correlation with human task-specific reading. Through an input reduction experiment we give complementary insights on the sparsity and fidelity trade-off, showing that lower-entropy attention vectors are more faithful.
[ { "created": "Mon, 25 Apr 2022 08:23:13 GMT", "version": "v1" } ]
2022-05-23
[ [ "Brandl", "Stephanie", "" ], [ "Eberle", "Oliver", "" ], [ "Pilot", "Jonas", "" ], [ "Søgaard", "Anders", "" ] ]
Learned self-attention functions in state-of-the-art NLP models often correlate with human attention. We investigate whether self-attention in large-scale pre-trained language models is as predictive of human eye fixation patterns during task-reading as classical cognitive models of human attention. We compare attention functions across two task-specific reading datasets for sentiment analysis and relation extraction. We find the predictiveness of large-scale pre-trained self-attention for human attention depends on `what is in the tail', e.g., the syntactic nature of rare contexts. Further, we observe that task-specific fine-tuning does not increase the correlation with human task-specific reading. Through an input reduction experiment we give complementary insights on the sparsity and fidelity trade-off, showing that lower-entropy attention vectors are more faithful.
1801.05574
Ying Lu
Ying Lu, Liming Chen, Alexandre Saidi, Xianfeng Gu
Brenier approach for optimal transportation between a quasi-discrete measure and a discrete measure
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Correctly estimating the discrepancy between two data distributions has always been an important task in Machine Learning. Recently, Cuturi proposed the Sinkhorn distance, which makes use of an approximate Optimal Transport cost between two distributions as a distance to describe distribution discrepancy. Although it has been successfully adopted in various machine learning applications (e.g. in Natural Language Processing and Computer Vision) since then, the Sinkhorn distance also suffers from two non-negligible limitations. The first is that the Sinkhorn distance only gives an approximation of the true Wasserstein distance; the second is the `divide by zero' problem, which often occurs during matrix scaling when the entropy regularization coefficient is set to a small value. In this paper, we introduce a new Brenier approach for calculating a more accurate Wasserstein distance between two discrete distributions. This approach successfully avoids the two limitations of the Sinkhorn distance described above and gives an alternative way of estimating distribution discrepancy.
[ { "created": "Wed, 17 Jan 2018 07:06:21 GMT", "version": "v1" } ]
2018-01-18
[ [ "Lu", "Ying", "" ], [ "Chen", "Liming", "" ], [ "Saidi", "Alexandre", "" ], [ "Gu", "Xianfeng", "" ] ]
Correctly estimating the discrepancy between two data distributions has always been an important task in Machine Learning. Recently, Cuturi proposed the Sinkhorn distance, which makes use of an approximate Optimal Transport cost between two distributions as a distance to describe distribution discrepancy. Although it has been successfully adopted in various machine learning applications (e.g. in Natural Language Processing and Computer Vision) since then, the Sinkhorn distance also suffers from two non-negligible limitations. The first is that the Sinkhorn distance only gives an approximation of the true Wasserstein distance; the second is the `divide by zero' problem, which often occurs during matrix scaling when the entropy regularization coefficient is set to a small value. In this paper, we introduce a new Brenier approach for calculating a more accurate Wasserstein distance between two discrete distributions. This approach successfully avoids the two limitations of the Sinkhorn distance described above and gives an alternative way of estimating distribution discrepancy.
2204.07454
Nikolai Kudasov
Nikolai Kudasov and Violetta Sim
Formalizing $\varphi$-calculus: a purely object-oriented calculus of decorated objects
null
null
null
null
cs.PL cs.LO
http://creativecommons.org/licenses/by/4.0/
Many calculi exist for modelling various features of object-oriented languages. Many of them are based on $\lambda$-calculus and focus either on statically typed class-based languages or dynamic prototype-based languages. We formalize the untyped calculus of decorated objects, informally presented by Bugayenko, which is defined in terms of objects and relies on decoration as the primary mechanism of object extension. It is not based on $\lambda$-calculus, yet with only four basic syntactic constructions it is just as complete. We prove that the calculus is confluent (i.e. possesses the Church-Rosser property), and introduce an abstract machine for call-by-name evaluation. Finally, we provide a sound translation to $\lambda$-calculus with records.
[ { "created": "Fri, 15 Apr 2022 13:25:01 GMT", "version": "v1" }, { "created": "Fri, 2 Dec 2022 10:55:15 GMT", "version": "v2" } ]
2022-12-05
[ [ "Kudasov", "Nikolai", "" ], [ "Sim", "Violetta", "" ] ]
Many calculi exist for modelling various features of object-oriented languages. Many of them are based on $\lambda$-calculus and focus either on statically typed class-based languages or dynamic prototype-based languages. We formalize the untyped calculus of decorated objects, informally presented by Bugayenko, which is defined in terms of objects and relies on decoration as the primary mechanism of object extension. It is not based on $\lambda$-calculus, yet with only four basic syntactic constructions it is just as complete. We prove that the calculus is confluent (i.e. possesses the Church-Rosser property), and introduce an abstract machine for call-by-name evaluation. Finally, we provide a sound translation to $\lambda$-calculus with records.
2407.05293
Xiaoling Hu
Xiaowei Qian, Xiaoling Hu, Chenxi Liu, Mugen Peng
Wideband Beamforming with RIS: A Unified Framework via Space-Frequency Transformation
13 pages, 16 figures
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spectrum shift from the sub-6G band to the high-frequency band has posed an ever-increasing demand for a paradigm shift from narrowband beamforming to wideband beamforming. Despite recent research efforts, the problem of wideband beamforming design is particularly challenging in reconfigurable intelligent surface (RIS)-assisted systems, because the RIS is not capable of performing frequency-dependent phase shifts, which induces high signal processing complexity. In this paper, we propose a simple yet efficient wideband beamforming design for RIS-assisted systems, in which a transmitter sends wideband signals to a desired target with the aid of the RIS. In our proposed design, we exploit the space-frequency Fourier transformation and the stationary phase method to yield an approximate closed-form solution for the RIS phase shifts, which significantly reduces the signal processing complexity compared to existing approaches. The obtained solution is then used to generate a large and flat beampattern over the desired frequency band. Through numerical results, we validate the effectiveness of our proposed beamforming design and demonstrate how it can improve system performance in terms of communication rate and sensing resolution. Beyond generating the flat beampattern, we highlight that our proposed design is capable of mimicking any desired beampattern by matching the RIS phase shift with the amplitude modulation function, thus providing valuable insights into the design of novel wideband beamforming for RIS-assisted systems.
[ { "created": "Sun, 7 Jul 2024 07:23:22 GMT", "version": "v1" } ]
2024-07-09
[ [ "Qian", "Xiaowei", "" ], [ "Hu", "Xiaoling", "" ], [ "Liu", "Chenxi", "" ], [ "Peng", "Mugen", "" ] ]
The spectrum shift from the sub-6G band to the high-frequency band has posed an ever-increasing demand for a paradigm shift from narrowband beamforming to wideband beamforming. Despite recent research efforts, the problem of wideband beamforming design is particularly challenging in reconfigurable intelligent surface (RIS)-assisted systems, because the RIS is not capable of performing frequency-dependent phase shifts, which induces high signal processing complexity. In this paper, we propose a simple yet efficient wideband beamforming design for RIS-assisted systems, in which a transmitter sends wideband signals to a desired target with the aid of the RIS. In our proposed design, we exploit the space-frequency Fourier transformation and the stationary phase method to yield an approximate closed-form solution for the RIS phase shifts, which significantly reduces the signal processing complexity compared to existing approaches. The obtained solution is then used to generate a large and flat beampattern over the desired frequency band. Through numerical results, we validate the effectiveness of our proposed beamforming design and demonstrate how it can improve system performance in terms of communication rate and sensing resolution. Beyond generating the flat beampattern, we highlight that our proposed design is capable of mimicking any desired beampattern by matching the RIS phase shift with the amplitude modulation function, thus providing valuable insights into the design of novel wideband beamforming for RIS-assisted systems.
2002.04397
Yuxiang Ren
Yuxiang Ren, Jiawei Zhang
Fake News Detection on News-Oriented Heterogeneous Information Networks through Hierarchical Graph Attention
null
null
null
null
cs.SI cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The viral spread of fake news has caused great social harm, making fake news detection an urgent task. Current fake news detection methods rely heavily on text information, learning from the extracted news content or writing style as internal knowledge. However, deliberate rumors can mask writing style, bypassing language models and invalidating simple text-based models. In fact, news articles and other related components (such as news creators and news topics) can be modeled as a heterogeneous information network (HIN for short). In this paper, we propose a novel fake news detection framework, namely the Hierarchical Graph Attention Network (HGAT), which uses a novel hierarchical attention mechanism to perform node representation learning in the HIN, and then detects fake news by classifying news article nodes. Experiments on two real-world fake news datasets show that HGAT can outperform text-based models and other network-based models. In addition, the experiments demonstrate the expandability and generalizability of our framework for graph representation learning and other node classification related applications in heterogeneous graphs.
[ { "created": "Wed, 5 Feb 2020 19:09:13 GMT", "version": "v1" }, { "created": "Sat, 13 Feb 2021 03:16:22 GMT", "version": "v2" } ]
2021-02-16
[ [ "Ren", "Yuxiang", "" ], [ "Zhang", "Jiawei", "" ] ]
The viral spread of fake news has caused great social harm, making fake news detection an urgent task. Current fake news detection methods rely heavily on text information, learning from the extracted news content or writing style as internal knowledge. However, deliberate rumors can mask writing style, bypassing language models and invalidating simple text-based models. In fact, news articles and other related components (such as news creators and news topics) can be modeled as a heterogeneous information network (HIN for short). In this paper, we propose a novel fake news detection framework, namely the Hierarchical Graph Attention Network (HGAT), which uses a novel hierarchical attention mechanism to perform node representation learning in the HIN, and then detects fake news by classifying news article nodes. Experiments on two real-world fake news datasets show that HGAT can outperform text-based models and other network-based models. In addition, the experiments demonstrate the expandability and generalizability of our framework for graph representation learning and other node classification related applications in heterogeneous graphs.
2404.17017
Jeremy Harper
Jeremy Harper
AutoGenesisAgent: Self-Generating Multi-Agent Systems for Complex Tasks
null
null
null
null
cs.MA
http://creativecommons.org/licenses/by-nc-nd/4.0/
The proliferation of large language models (LLMs) and their integration into multi-agent systems has paved the way for sophisticated automation in various domains. This paper introduces AutoGenesisAgent, a multi-agent system that autonomously designs and deploys other multi-agent systems tailored for specific tasks. AutoGenesisAgent comprises several specialized agents including System Understanding, System Design, Agent Generator, and several others that collectively manage the lifecycle of creating functional multi-agent systems from initial concept to deployment. Each agent in AutoGenesisAgent has distinct responsibilities ranging from interpreting input prompts to optimizing system performance, culminating in the deployment of a ready-to-use system. This proof-of-concept study discusses the design, implementation, and lessons learned from developing AutoGenesisAgent, highlighting its capability to generate and refine multi-agent systems autonomously, thereby reducing the need for extensive human oversight in the initial stages of system design. Keywords: multi-agent systems, large language models, system design automation, agent architecture, autonomous systems, software deployment
[ { "created": "Thu, 25 Apr 2024 20:20:51 GMT", "version": "v1" } ]
2024-04-29
[ [ "Harper", "Jeremy", "" ] ]
The proliferation of large language models (LLMs) and their integration into multi-agent systems has paved the way for sophisticated automation in various domains. This paper introduces AutoGenesisAgent, a multi-agent system that autonomously designs and deploys other multi-agent systems tailored for specific tasks. AutoGenesisAgent comprises several specialized agents including System Understanding, System Design, Agent Generator, and several others that collectively manage the lifecycle of creating functional multi-agent systems from initial concept to deployment. Each agent in AutoGenesisAgent has distinct responsibilities ranging from interpreting input prompts to optimizing system performance, culminating in the deployment of a ready-to-use system. This proof-of-concept study discusses the design, implementation, and lessons learned from developing AutoGenesisAgent, highlighting its capability to generate and refine multi-agent systems autonomously, thereby reducing the need for extensive human oversight in the initial stages of system design. Keywords: multi-agent systems, large language models, system design automation, agent architecture, autonomous systems, software deployment
2012.02453
Samhita Varambally
B. Samhita Varambally, Naman Sehgal
Optimising Design Verification Using Machine Learning: An Open Source Solution
null
null
null
null
cs.LG cs.AR
http://creativecommons.org/licenses/by-nc-sa/4.0/
With the complexity of Integrated Circuits increasing, design verification has become the most time consuming part of the ASIC design flow. Nearly 70% of the SoC design cycle is consumed by verification. The most commonly used approach to test all corner cases is through the use of Constrained Random Verification. Random stimulus is given in order to hit all possible combinations and test the design thoroughly. However, this approach often requires significant human expertise to reach all corner cases. This paper presents an alternative using Machine Learning to generate the input stimulus. This will allow for faster and more thorough verification of the design with less human intervention. Furthermore, it is proposed to use the open source verification environment 'Cocotb'. Based on Python, it is simple, intuitive and has a vast library of functions for machine learning applications. This makes it more convenient to use than the bulkier approach using traditional Hardware Verification Languages such as System Verilog or Specman E.
[ { "created": "Fri, 4 Dec 2020 08:18:05 GMT", "version": "v1" } ]
2020-12-07
[ [ "Varambally", "B. Samhita", "" ], [ "Sehgal", "Naman", "" ] ]
With the complexity of Integrated Circuits increasing, design verification has become the most time consuming part of the ASIC design flow. Nearly 70% of the SoC design cycle is consumed by verification. The most commonly used approach to test all corner cases is through the use of Constrained Random Verification. Random stimulus is given in order to hit all possible combinations and test the design thoroughly. However, this approach often requires significant human expertise to reach all corner cases. This paper presents an alternative using Machine Learning to generate the input stimulus. This will allow for faster and more thorough verification of the design with less human intervention. Furthermore, it is proposed to use the open source verification environment 'Cocotb'. Based on Python, it is simple, intuitive and has a vast library of functions for machine learning applications. This makes it more convenient to use than the bulkier approach using traditional Hardware Verification Languages such as System Verilog or Specman E.
2202.00537
Yuan Wu
Yuan Wu, Diana Inkpen, Ahmed El-Roby
Maximum Batch Frobenius Norm for Multi-Domain Text Classification
5 pages, ICASSP 2022
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-domain text classification (MDTC) has obtained remarkable achievements due to the advent of deep learning. Recently, many endeavors are devoted to applying adversarial learning to extract domain-invariant features to yield state-of-the-art results. However, these methods still face one challenge: transforming original features to be domain-invariant distorts the distributions of the original features, degrading the discriminability of the learned features. To address this issue, we first investigate the structure of the batch classification output matrix and theoretically justify that the discriminability of the learned features has a positive correlation with the Frobenius norm of the batch output matrix. Based on this finding, we propose a maximum batch Frobenius norm (MBF) method to boost the feature discriminability for MDTC. Experiments on two MDTC benchmarks show that our MBF approach can effectively advance the performance of the state-of-the-art.
[ { "created": "Sat, 29 Jan 2022 14:37:56 GMT", "version": "v1" } ]
2022-02-02
[ [ "Wu", "Yuan", "" ], [ "Inkpen", "Diana", "" ], [ "El-Roby", "Ahmed", "" ] ]
Multi-domain text classification (MDTC) has obtained remarkable achievements due to the advent of deep learning. Recently, many endeavors are devoted to applying adversarial learning to extract domain-invariant features to yield state-of-the-art results. However, these methods still face one challenge: transforming original features to be domain-invariant distorts the distributions of the original features, degrading the discriminability of the learned features. To address this issue, we first investigate the structure of the batch classification output matrix and theoretically justify that the discriminability of the learned features has a positive correlation with the Frobenius norm of the batch output matrix. Based on this finding, we propose a maximum batch Frobenius norm (MBF) method to boost the feature discriminability for MDTC. Experiments on two MDTC benchmarks show that our MBF approach can effectively advance the performance of the state-of-the-art.
1508.05488
Gang Mei
Gang Mei
CudaChain: A Practical GPU-accelerated 2D Convex Hull Algorithm
null
SpringerPlus 2016:2284
10.1186/s40064-016-2284-4
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a practical GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed algorithm consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of calculating the expected convex hull on the CPU. We first discard the interior points that lie inside a quadrilateral formed by four extreme points, and then distribute the remaining points into several (typically four) subregions. For each subset of points, we first sort them in parallel, then perform the second round of discarding using SPA, and finally form a simple chain for the current remaining points. A simple polygon can be easily generated by directly connecting all the chains in the subregions. We then obtain the expected convex hull of the input points by calculating the convex hull of the simple polygon. We use the library Thrust to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that our algorithm achieves 5x ~ 6x speedups over the Qhull implementation for 20M points. Thus, this algorithm is competitive in practical applications for its simplicity and satisfactory efficiency.
[ { "created": "Sat, 22 Aug 2015 09:32:11 GMT", "version": "v1" } ]
2016-05-24
[ [ "Mei", "Gang", "" ] ]
This paper presents a practical GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed algorithm consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of calculating the expected convex hull on the CPU. We first discard the interior points that lie inside a quadrilateral formed by four extreme points, and then distribute the remaining points into several (typically four) subregions. For each subset of points, we first sort them in parallel, then perform the second round of discarding using SPA, and finally form a simple chain for the current remaining points. A simple polygon can be easily generated by directly connecting all the chains in the subregions. We then obtain the expected convex hull of the input points by calculating the convex hull of the simple polygon. We use the library Thrust to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that our algorithm achieves 5x ~ 6x speedups over the Qhull implementation for 20M points. Thus, this algorithm is competitive in practical applications for its simplicity and satisfactory efficiency.
2210.07453
Jonathan Pilault
Jonathan Pilault, Michael Galkin, Bahare Fatemi, Perouz Taslakian, David Vasquez, Christopher Pal
Using Graph Algorithms to Pretrain Graph Completion Transformers
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent work on Graph Neural Networks has demonstrated that self-supervised pretraining can further enhance performance on downstream graph, link, and node classification tasks. However, the efficacy of pretraining tasks has not been fully investigated for downstream large knowledge graph completion tasks. Using a contextualized knowledge graph embedding approach, we investigate five different pretraining signals, constructed using several graph algorithms and no external data, as well as their combination. We leverage the versatility of our Transformer-based model to explore graph structure generation pretraining tasks (i.e. path and k-hop neighborhood generation), typically inapplicable to most graph embedding methods. We further propose a new path-finding algorithm guided by information gain and find that it is the best-performing pretraining task across three downstream knowledge graph completion datasets. While using our new path-finding algorithm as a pretraining signal provides 2-3% MRR improvements, we show that pretraining on all signals together gives the best knowledge graph completion results. In a multitask setting that combines all pretraining tasks, our method surpasses the latest and strongest-performing knowledge graph embedding methods on all metrics for FB15K-237, on MRR and Hit@1 for WN18RR, and on MRR and Hit@10 for JF17K (a knowledge hypergraph dataset).
[ { "created": "Fri, 14 Oct 2022 01:41:10 GMT", "version": "v1" }, { "created": "Mon, 27 Mar 2023 15:04:30 GMT", "version": "v2" } ]
2023-03-28
[ [ "Pilault", "Jonathan", "" ], [ "Galkin", "Michael", "" ], [ "Fatemi", "Bahare", "" ], [ "Taslakian", "Perouz", "" ], [ "Vasquez", "David", "" ], [ "Pal", "Christopher", "" ] ]
Recent work on Graph Neural Networks has demonstrated that self-supervised pretraining can further enhance performance on downstream graph, link, and node classification tasks. However, the efficacy of pretraining tasks has not been fully investigated for downstream large knowledge graph completion tasks. Using a contextualized knowledge graph embedding approach, we investigate five different pretraining signals, constructed using several graph algorithms and no external data, as well as their combination. We leverage the versatility of our Transformer-based model to explore graph structure generation pretraining tasks (i.e. path and k-hop neighborhood generation), typically inapplicable to most graph embedding methods. We further propose a new path-finding algorithm guided by information gain and find that it is the best-performing pretraining task across three downstream knowledge graph completion datasets. While using our new path-finding algorithm as a pretraining signal provides 2-3% MRR improvements, we show that pretraining on all signals together gives the best knowledge graph completion results. In a multitask setting that combines all pretraining tasks, our method surpasses the latest and strongest-performing knowledge graph embedding methods on all metrics for FB15K-237, on MRR and Hit@1 for WN18RR, and on MRR and Hit@10 for JF17K (a knowledge hypergraph dataset).
1912.00320
Dongrui Wu
Wen Zhang and Dongrui Wu
Discriminative Joint Probability Maximum Mean Discrepancy (DJP-MMD) for Domain Adaptation
Int'l Joint Conf. on Neural Networks (IJCNN), Glasgow, UK, July 2020
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Maximum mean discrepancy (MMD) has been widely adopted in domain adaptation to measure the discrepancy between the source and target domain distributions. Many existing domain adaptation approaches are based on the joint MMD, which is computed as the (weighted) sum of the marginal distribution discrepancy and the conditional distribution discrepancy; however, a more natural metric may be their joint probability distribution discrepancy. Additionally, most metrics only aim to increase the transferability between domains, but ignore the discriminability between different classes, which may result in insufficient classification performance. To address these issues, discriminative joint probability MMD (DJP-MMD) is proposed in this paper to replace the frequently-used joint MMD in domain adaptation. It has two desirable properties: 1) it provides a new theoretical basis for computing the distribution discrepancy, which is simpler and more accurate; 2) it increases the transferability and discriminability simultaneously. We validate its performance by embedding it into a joint probability domain adaptation framework. Experiments on six image classification datasets demonstrated that the proposed DJP-MMD can outperform traditional MMDs.
[ { "created": "Sun, 1 Dec 2019 04:52:41 GMT", "version": "v1" }, { "created": "Tue, 14 Jan 2020 19:47:44 GMT", "version": "v2" }, { "created": "Thu, 26 Mar 2020 08:04:37 GMT", "version": "v3" }, { "created": "Fri, 10 Apr 2020 15:13:57 GMT", "version": "v4" } ]
2020-04-13
[ [ "Zhang", "Wen", "" ], [ "Wu", "Dongrui", "" ] ]
Maximum mean discrepancy (MMD) has been widely adopted in domain adaptation to measure the discrepancy between the source and target domain distributions. Many existing domain adaptation approaches are based on the joint MMD, which is computed as the (weighted) sum of the marginal distribution discrepancy and the conditional distribution discrepancy; however, a more natural metric may be their joint probability distribution discrepancy. Additionally, most metrics only aim to increase the transferability between domains, but ignore the discriminability between different classes, which may result in insufficient classification performance. To address these issues, discriminative joint probability MMD (DJP-MMD) is proposed in this paper to replace the frequently-used joint MMD in domain adaptation. It has two desirable properties: 1) it provides a new theoretical basis for computing the distribution discrepancy, which is simpler and more accurate; 2) it increases the transferability and discriminability simultaneously. We validate its performance by embedding it into a joint probability domain adaptation framework. Experiments on six image classification datasets demonstrated that the proposed DJP-MMD can outperform traditional MMDs.
1708.07810
I\~naki Esnaola
Ke Sun, Inaki Esnaola, Samir M. Perlaza and H. Vincent Poor
Information-Theoretic Attacks in the Smart Grid
2017 IEEE International Conference on Smart Grid Communications (SmartGridComm)
in Proc. IEEE Int. Conf. on Smart Grid Commun., Dresden, Germany, Oct. 2017, pp. 455-460
null
null
cs.IT cs.SY math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gaussian random attacks that jointly minimize the amount of information obtained by the operator from the grid and the probability of attack detection are presented. The construction of the attack is posed as an optimization problem with a utility function that captures two effects: firstly, minimizing the mutual information between the measurements and the state variables; secondly, minimizing the probability of attack detection via the Kullback-Leibler divergence between the distribution of the measurements with an attack and the distribution of the measurements without an attack. Additionally, a lower bound on the utility function achieved by the attacks constructed with imperfect knowledge of the second order statistics of the state variables is obtained. The performance of the attack construction using the sample covariance matrix of the state variables is numerically evaluated. The above results are tested on the IEEE 30-Bus test system.
[ { "created": "Fri, 25 Aug 2017 17:03:10 GMT", "version": "v1" } ]
2020-04-08
[ [ "Sun", "Ke", "" ], [ "Esnaola", "Inaki", "" ], [ "Perlaza", "Samir M.", "" ], [ "Poor", "H. Vincent", "" ] ]
Gaussian random attacks that jointly minimize the amount of information obtained by the operator from the grid and the probability of attack detection are presented. The construction of the attack is posed as an optimization problem with a utility function that captures two effects: firstly, minimizing the mutual information between the measurements and the state variables; secondly, minimizing the probability of attack detection via the Kullback-Leibler divergence between the distribution of the measurements with an attack and the distribution of the measurements without an attack. Additionally, a lower bound on the utility function achieved by the attacks constructed with imperfect knowledge of the second order statistics of the state variables is obtained. The performance of the attack construction using the sample covariance matrix of the state variables is numerically evaluated. The above results are tested on the IEEE 30-Bus test system.
2407.14302
Yurong Zhang
Yurong Zhang, Honghao Chen, Xinyu Zhang, Xiangxiang Chu, Li Song
Dyn-Adapter: Towards Disentangled Representation for Efficient Visual Recognition
ECCV 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Parameter-efficient transfer learning (PETL) is a promising task, aiming to adapt the large-scale pre-trained model to downstream tasks with a relatively modest cost. However, current PETL methods struggle in compressing computational complexity and bear a heavy inference burden due to the complete forward process. This paper presents an efficient visual recognition paradigm, called Dynamic Adapter (Dyn-Adapter), that boosts PETL efficiency by subtly disentangling features in multiple levels. Our approach is simple: first, we devise a dynamic architecture with balanced early heads for multi-level feature extraction, along with an adaptive training strategy. Second, we introduce a bidirectional sparsity strategy driven by the pursuit of powerful generalization ability. These qualities enable us to fine-tune efficiently and effectively: we reduce FLOPs during inference by 50%, while maintaining or even yielding higher recognition accuracy. Extensive experiments on diverse datasets and pretrained backbones demonstrate the potential of Dyn-Adapter serving as a general efficiency booster for PETL in vision recognition tasks.
[ { "created": "Fri, 19 Jul 2024 13:33:38 GMT", "version": "v1" }, { "created": "Tue, 23 Jul 2024 07:57:17 GMT", "version": "v2" } ]
2024-07-24
[ [ "Zhang", "Yurong", "" ], [ "Chen", "Honghao", "" ], [ "Zhang", "Xinyu", "" ], [ "Chu", "Xiangxiang", "" ], [ "Song", "Li", "" ] ]
Parameter-efficient transfer learning (PETL) is a promising task, aiming to adapt the large-scale pre-trained model to downstream tasks with a relatively modest cost. However, current PETL methods struggle in compressing computational complexity and bear a heavy inference burden due to the complete forward process. This paper presents an efficient visual recognition paradigm, called Dynamic Adapter (Dyn-Adapter), that boosts PETL efficiency by subtly disentangling features in multiple levels. Our approach is simple: first, we devise a dynamic architecture with balanced early heads for multi-level feature extraction, along with an adaptive training strategy. Second, we introduce a bidirectional sparsity strategy driven by the pursuit of powerful generalization ability. These qualities enable us to fine-tune efficiently and effectively: we reduce FLOPs during inference by 50%, while maintaining or even yielding higher recognition accuracy. Extensive experiments on diverse datasets and pretrained backbones demonstrate the potential of Dyn-Adapter serving as a general efficiency booster for PETL in vision recognition tasks.
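The inference FLOP savings come from exiting at an early head once it is confident enough. Dyn-Adapter's balanced heads and bidirectional sparsity are more involved than this; the sketch below only illustrates the generic early-exit idea, with made-up features and an assumed confidence threshold:

```python
import numpy as np

def dynamic_forward(features_per_level, heads, threshold=0.8):
    """Early-exit sketch: each feature level has a linear classification head;
    stop at the first head whose max softmax confidence clears the threshold."""
    for level, (feat, head) in enumerate(zip(features_per_level, heads)):
        logits = feat @ head
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        if probs.max() >= threshold:
            return level, int(probs.argmax())
    # No head was confident: fall back to the deepest head's prediction.
    return len(heads) - 1, int(probs.argmax())

heads = [np.eye(2), np.eye(2)]                    # toy per-level heads
confident = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]
hard = [np.array([0.1, 0.0]), np.array([0.0, 5.0])]
print(dynamic_forward(confident, heads))          # exits at the first head
print(dynamic_forward(hard, heads))               # falls through to the last head
```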
2210.04150
Feng Liang
Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, Diana Marculescu
Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP
CVPR 2023. Project page: https://jeff-liangf.github.io/projects/ovseg
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Open-vocabulary semantic segmentation aims to segment an image into semantic regions according to text descriptions, which may not have been seen during training. Recent two-stage methods first generate class-agnostic mask proposals and then leverage pre-trained vision-language models, e.g., CLIP, to classify masked regions. We identify the performance bottleneck of this paradigm to be the pre-trained CLIP model, since it does not perform well on masked images. To address this, we propose to finetune CLIP on a collection of masked image regions and their corresponding text descriptions. We collect training data by mining an existing image-caption dataset (e.g., COCO Captions), using CLIP to match masked image regions to nouns in the image captions. Compared with the more precise and manually annotated segmentation labels with fixed classes (e.g., COCO-Stuff), we find our noisy but diverse dataset can better retain CLIP's generalization ability. Along with finetuning the entire model, we utilize the "blank" areas in masked images using a method we dub mask prompt tuning. Experiments demonstrate mask prompt tuning brings significant improvement without modifying any weights of CLIP, and it can further improve a fully finetuned model. In particular, when trained on COCO and evaluated on ADE20K-150, our best model achieves 29.6% mIoU, which is +8.5% higher than the previous state-of-the-art. For the first time, open-vocabulary generalist models match the performance of supervised specialist models without dataset-specific adaptations.
[ { "created": "Sun, 9 Oct 2022 02:57:32 GMT", "version": "v1" }, { "created": "Tue, 14 Mar 2023 02:48:42 GMT", "version": "v2" }, { "created": "Sat, 1 Apr 2023 19:00:47 GMT", "version": "v3" } ]
2023-04-04
[ [ "Liang", "Feng", "" ], [ "Wu", "Bichen", "" ], [ "Dai", "Xiaoliang", "" ], [ "Li", "Kunpeng", "" ], [ "Zhao", "Yinan", "" ], [ "Zhang", "Hang", "" ], [ "Zhang", "Peizhao", "" ], [ "Vajda", "Peter", "" ], [ "Marculescu", "Diana", "" ] ]
Open-vocabulary semantic segmentation aims to segment an image into semantic regions according to text descriptions, which may not have been seen during training. Recent two-stage methods first generate class-agnostic mask proposals and then leverage pre-trained vision-language models, e.g., CLIP, to classify masked regions. We identify the performance bottleneck of this paradigm to be the pre-trained CLIP model, since it does not perform well on masked images. To address this, we propose to finetune CLIP on a collection of masked image regions and their corresponding text descriptions. We collect training data by mining an existing image-caption dataset (e.g., COCO Captions), using CLIP to match masked image regions to nouns in the image captions. Compared with the more precise and manually annotated segmentation labels with fixed classes (e.g., COCO-Stuff), we find our noisy but diverse dataset can better retain CLIP's generalization ability. Along with finetuning the entire model, we utilize the "blank" areas in masked images using a method we dub mask prompt tuning. Experiments demonstrate mask prompt tuning brings significant improvement without modifying any weights of CLIP, and it can further improve a fully finetuned model. In particular, when trained on COCO and evaluated on ADE20K-150, our best model achieves 29.6% mIoU, which is +8.5% higher than the previous state-of-the-art. For the first time, open-vocabulary generalist models match the performance of supervised specialist models without dataset-specific adaptations.
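The data-mining step pairs masked image regions with caption nouns by embedding similarity. A minimal sketch with placeholder vectors standing in for the actual CLIP image and text features (the function name and shapes are assumptions):

```python
import numpy as np

def match_regions_to_nouns(region_embs, noun_embs):
    """Assign each masked-region embedding to its most similar noun embedding
    by cosine similarity, as in the region/noun mining step."""
    r = region_embs / np.linalg.norm(region_embs, axis=1, keepdims=True)
    n = noun_embs / np.linalg.norm(noun_embs, axis=1, keepdims=True)
    sims = r @ n.T                          # (num_regions, num_nouns)
    return sims.argmax(axis=1), sims.max(axis=1)

# Toy 2-D embeddings standing in for CLIP image/text features.
regions = np.array([[1.0, 0.1], [0.1, 1.0]])
nouns = np.array([[0.9, 0.0], [0.0, 0.8]])  # e.g. embeddings of "dog", "grass"
idx, scores = match_regions_to_nouns(regions, nouns)
print(idx)  # each region paired with its best-matching caption noun
```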
2012.14456
Shiv Ram Dubey
Jayendra Kantipudi, Shiv Ram Dubey, Soumendu Chakraborty
Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks
Accepted in IEEE Transactions on Artificial Intelligence
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Convolutional Neural Networks (CNNs) have emerged as a very powerful data dependent hierarchical feature extraction method. It is widely used in several computer vision problems. The CNNs learn the important visual features from training samples automatically. It is observed that the network overfits the training samples very easily. Several regularization methods have been proposed to avoid the overfitting. In spite of this, the network is sensitive to the color distribution within the images, which is ignored by the existing approaches. In this paper, we discover the color robustness problem of CNN by proposing a Color Channel Perturbation (CCP) attack to fool the CNNs. In the CCP attack, new images are generated with new channels created by combining the original channels with stochastic weights. Experiments were carried out over the widely used CIFAR10, Caltech256 and TinyImageNet datasets in the image classification framework. The VGG, ResNet and DenseNet models are used to test the impact of the proposed attack. It is observed that the performance of the CNNs degrades drastically under the proposed CCP attack. Results show the effect of the proposed simple CCP attack on the robustness of the trained CNN models. The results are also compared with existing CNN fooling approaches to evaluate the accuracy drop. We also propose a primary defense mechanism to this problem by augmenting the training dataset with the proposed CCP attack. State-of-the-art CNN robustness under the CCP attack is observed in the experiments when using the proposed solution. The code is made publicly available at \url{https://github.com/jayendrakantipudi/Color-Channel-Perturbation-Attack}.
[ { "created": "Sun, 20 Dec 2020 11:35:29 GMT", "version": "v1" } ]
2021-01-01
[ [ "Kantipudi", "Jayendra", "" ], [ "Dubey", "Shiv Ram", "" ], [ "Chakraborty", "Soumendu", "" ] ]
The Convolutional Neural Networks (CNNs) have emerged as a very powerful data dependent hierarchical feature extraction method. It is widely used in several computer vision problems. The CNNs learn the important visual features from training samples automatically. It is observed that the network overfits the training samples very easily. Several regularization methods have been proposed to avoid the overfitting. In spite of this, the network is sensitive to the color distribution within the images, which is ignored by the existing approaches. In this paper, we discover the color robustness problem of CNN by proposing a Color Channel Perturbation (CCP) attack to fool the CNNs. In the CCP attack, new images are generated with new channels created by combining the original channels with stochastic weights. Experiments were carried out over the widely used CIFAR10, Caltech256 and TinyImageNet datasets in the image classification framework. The VGG, ResNet and DenseNet models are used to test the impact of the proposed attack. It is observed that the performance of the CNNs degrades drastically under the proposed CCP attack. Results show the effect of the proposed simple CCP attack on the robustness of the trained CNN models. The results are also compared with existing CNN fooling approaches to evaluate the accuracy drop. We also propose a primary defense mechanism to this problem by augmenting the training dataset with the proposed CCP attack. State-of-the-art CNN robustness under the CCP attack is observed in the experiments when using the proposed solution. The code is made publicly available at \url{https://github.com/jayendrakantipudi/Color-Channel-Perturbation-Attack}.
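The attack itself is simple to sketch: each output channel is a stochastic mixture of the original R, G, B channels. The exact weight distribution is defined in the paper's released code; the row-stochastic (convex-combination) choice below is an assumption made so the output stays in a valid pixel range:

```python
import numpy as np

def ccp_attack(img, rng):
    """Color Channel Perturbation sketch: replace each channel with a random
    convex combination of the original R, G, B channels.

    img: float array of shape (H, W, 3) with values in [0, 1].
    """
    w = rng.random((3, 3))
    w /= w.sum(axis=1, keepdims=True)       # row-stochastic mixing weights
    # out[h, w, i] = sum_j w[i, j] * img[h, w, j]
    return np.einsum('ij,hwj->hwi', w, img)

rng = np.random.default_rng(42)
img = rng.random((4, 4, 3))
attacked = ccp_attack(img, rng)
print(attacked.shape)
```

Because each new channel is a convex combination of the originals, pixel values remain in range while the color distribution shifts, which is exactly the perturbation the CNNs turn out to be sensitive to.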
2007.11643
Andrej Sajenko
Frank Kammer and Andrej Sajenko
Space-Efficient Graph Kernelizations
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $n$ be the size of a parameterized problem and $k$ the parameter. We present kernels for Feedback Vertex Set, Path Contraction and Cluster Editing/Deletion whose sizes are all polynomial in $k$ and that are computable in polynomial time and with $O(\rm{poly}(k) \log n)$ bits (of working memory). By using kernel cascades, we obtain the best known kernels in polynomial time with $O(\rm{poly}(k) \log n)$ bits.
[ { "created": "Wed, 22 Jul 2020 19:39:05 GMT", "version": "v1" }, { "created": "Thu, 4 Mar 2021 11:09:09 GMT", "version": "v2" }, { "created": "Tue, 12 Sep 2023 17:51:12 GMT", "version": "v3" }, { "created": "Tue, 20 Feb 2024 15:43:26 GMT", "version": "v4" } ]
2024-02-21
[ [ "Kammer", "Frank", "" ], [ "Sajenko", "Andrej", "" ] ]
Let $n$ be the size of a parameterized problem and $k$ the parameter. We present kernels for Feedback Vertex Set, Path Contraction and Cluster Editing/Deletion whose sizes are all polynomial in $k$ and that are computable in polynomial time and with $O(\rm{poly}(k) \log n)$ bits (of working memory). By using kernel cascades, we obtain the best known kernels in polynomial time with $O(\rm{poly}(k) \log n)$ bits.
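A flavor of the reduction rules behind such kernels: a vertex of degree at most one lies on no cycle, so it can be deleted without affecting any feedback vertex set. The dictionary-based sketch below illustrates the classic rule only; it deliberately ignores the $O(\rm{poly}(k) \log n)$-bit space budget that is the actual contribution of the paper:

```python
def strip_low_degree(adj):
    """Kernelization-style reduction rule for Feedback Vertex Set:
    repeatedly delete vertices of degree <= 1 (they lie on no cycle).

    adj: dict mapping vertex -> set of neighbours (undirected, no self-loops).
    Returns the reduced graph as a new dict.
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    queue = [v for v in adj if len(adj[v]) <= 1]
    while queue:
        v = queue.pop()
        if v not in adj:
            continue
        for u in adj[v]:
            adj[u].discard(v)
            if len(adj[u]) <= 1:
                queue.append(u)
        del adj[v]
    return adj

path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
cycle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(strip_low_degree(path))   # a path has no cycle: everything is removed
print(strip_low_degree(cycle))  # a triangle survives untouched
```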
2005.06123
Ming-Chang Chiu
Ming-Chang Chiu, Tiantian Feng, Xiang Ren, Shrikanth Narayanan
Screenplay Quality Assessment: Can We Predict Who Gets Nominated?
4 pages, 3 figures, accepted to ACL NUSE workshop 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deciding which scripts to turn into movies is a costly and time-consuming process for filmmakers. Thus, building a tool to aid script selection, an initial phase in movie production, can be very beneficial. Toward that goal, in this work, we present a method to evaluate the quality of a screenplay based on linguistic cues. We address this in a two-fold approach: (1) we define the task as predicting nominations of scripts at major film awards with the hypothesis that the peer-recognized scripts should have a greater chance to succeed. (2) based on industry opinions and narratology, we extract and integrate domain-specific features into common classification techniques. We face two challenges (1) scripts are much longer than other document datasets (2) nominated scripts are limited and thus difficult to collect. However, with narratology-inspired modeling and domain features, our approach offers clear improvements over strong baselines. Our work provides a new approach for future work in screenplay analysis.
[ { "created": "Wed, 13 May 2020 02:39:56 GMT", "version": "v1" } ]
2020-05-14
[ [ "Chiu", "Ming-Chang", "" ], [ "Feng", "Tiantian", "" ], [ "Ren", "Xiang", "" ], [ "Narayanan", "Shrikanth", "" ] ]
Deciding which scripts to turn into movies is a costly and time-consuming process for filmmakers. Thus, building a tool to aid script selection, an initial phase in movie production, can be very beneficial. Toward that goal, in this work, we present a method to evaluate the quality of a screenplay based on linguistic cues. We address this in a two-fold approach: (1) we define the task as predicting nominations of scripts at major film awards with the hypothesis that the peer-recognized scripts should have a greater chance to succeed. (2) based on industry opinions and narratology, we extract and integrate domain-specific features into common classification techniques. We face two challenges (1) scripts are much longer than other document datasets (2) nominated scripts are limited and thus difficult to collect. However, with narratology-inspired modeling and domain features, our approach offers clear improvements over strong baselines. Our work provides a new approach for future work in screenplay analysis.
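A sketch of what domain-specific screenplay features can look like, using toy stand-ins (scene headings and character cues) rather than the paper's narratology-inspired feature set:

```python
def screenplay_features(script):
    """Toy domain features for a screenplay: scene-heading count, character
    cues, and overall length. Illustrative stand-ins only; the heading/cue
    conventions assumed here are standard screenplay formatting."""
    lines = [l.strip() for l in script.splitlines() if l.strip()]
    n_scenes = sum(l.startswith(("INT.", "EXT.")) for l in lines)
    # Character cues are conventionally all-caps lines preceding dialogue.
    n_cues = sum(l.isupper() and not l.startswith(("INT.", "EXT."))
                 for l in lines)
    return {"scenes": n_scenes, "cues": n_cues, "lines": len(lines)}

script = "INT. HOUSE - DAY\nJOHN\nHello there.\nEXT. STREET - NIGHT\n"
feats = screenplay_features(script)
print(feats)
```

Feature vectors of this kind would then be concatenated with text representations and fed to a standard classifier to predict award nominations.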
1703.07952
Fei Wen
Fei Wen, Yuan Yang, Ling Pei, Wenxian Yu, and Peilin Liu
Efficient and Robust Recovery of Sparse Signal and Image Using Generalized Nonconvex Regularization
13 pages, 5 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work addresses the robust reconstruction problem of a sparse signal from compressed measurements. We propose a robust formulation for sparse reconstruction which employs the $\ell_1$-norm as the loss function for the residual error and utilizes a generalized nonconvex penalty for sparsity inducing. The $\ell_1$-loss is less sensitive to outliers in the measurements than the popular $\ell_2$-loss, while the nonconvex penalty has the capability of ameliorating the bias problem of the popular convex LASSO penalty and thus can yield more accurate recovery. To solve this nonconvex and nonsmooth minimization formulation efficiently, we propose a first-order algorithm based on alternating direction method of multipliers (ADMM). A smoothing strategy on the $\ell_1$-loss function has been used in deriving the new algorithm to make it convergent. Further, a sufficient condition for the convergence of the new algorithm has been provided for generalized nonconvex regularization. In comparison with several state-of-the-art algorithms, the new algorithm showed better performance in numerical experiments in recovering sparse signals and compressible images. The new algorithm scales well for large-scale problems, as often encountered in image processing.
[ { "created": "Thu, 23 Mar 2017 07:36:45 GMT", "version": "v1" }, { "created": "Wed, 29 Mar 2017 08:39:43 GMT", "version": "v2" } ]
2017-03-30
[ [ "Wen", "Fei", "" ], [ "Yang", "Yuan", "" ], [ "Pei", "Ling", "" ], [ "Yu", "Wenxian", "" ], [ "Liu", "Peilin", "" ] ]
This work addresses the robust reconstruction problem of a sparse signal from compressed measurements. We propose a robust formulation for sparse reconstruction which employs the $\ell_1$-norm as the loss function for the residual error and utilizes a generalized nonconvex penalty for sparsity inducing. The $\ell_1$-loss is less sensitive to outliers in the measurements than the popular $\ell_2$-loss, while the nonconvex penalty has the capability of ameliorating the bias problem of the popular convex LASSO penalty and thus can yield more accurate recovery. To solve this nonconvex and nonsmooth minimization formulation efficiently, we propose a first-order algorithm based on alternating direction method of multipliers (ADMM). A smoothing strategy on the $\ell_1$-loss function has been used in deriving the new algorithm to make it convergent. Further, a sufficient condition for the convergence of the new algorithm has been provided for generalized nonconvex regularization. In comparison with several state-of-the-art algorithms, the new algorithm showed better performance in numerical experiments in recovering sparse signals and compressible images. The new algorithm scales well for large-scale problems, as often encountered in image processing.
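The bias difference between the convex LASSO penalty and a nonconvex alternative shows up directly in their proximal operators, which are the building blocks of ADMM updates of this kind. The minimax concave penalty (MCP) below is one concrete instance of the generalized nonconvex family; the paper covers a broader class:

```python
import numpy as np

def prox_l1(v, lam):
    """Soft thresholding: proximal operator of the convex LASSO penalty.
    Every surviving coefficient is shrunk toward zero by lam (the bias)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_mcp(v, lam, a=3.0):
    """Proximal operator of the (nonconvex) minimax concave penalty, a > 1.
    Coefficients with |v| > a*lam pass through unshrunk, removing the bias."""
    small = np.abs(v) <= a * lam
    shrunk = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0) / (1.0 - 1.0 / a)
    return np.where(small, shrunk, v)

v = np.array([0.5, 1.5, 4.0])
print(prox_l1(v, 1.0))           # all nonzero outputs biased down by lam
print(prox_mcp(v, 1.0, a=2.0))   # 4.0 > a*lam is kept exactly
```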
2012.01925
Tatiana L\'opez-Guevara
Tatiana Lopez-Guevara, Michael Burke, Nicholas K. Taylor, Kartic Subr
IV-Posterior: Inverse Value Estimation for Interpretable Policy Certificates
null
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Model-free reinforcement learning (RL) is a powerful tool to learn a broad range of robot skills and policies. However, a lack of policy interpretability can inhibit their successful deployment in downstream applications, particularly when differences in environmental conditions may result in unpredictable behaviour or generalisation failures. As a result, there has been a growing emphasis in machine learning around the inclusion of stronger inductive biases in models to improve generalisation. This paper proposes an alternative strategy, inverse value estimation for interpretable policy certificates (IV-Posterior), which seeks to identify the inductive biases or idealised conditions of operation already held by pre-trained policies, and then use this information to guide their deployment. IV-Posterior uses Masked Autoregressive Flows to fit distributions over the set of conditions or environmental parameters in which a policy is likely to be effective. This distribution can then be used as a policy certificate in downstream applications. We illustrate the use of IV-Posterior across two environments, and show that substantial performance gains can be obtained when policy selection incorporates knowledge of the inductive biases that these policies hold.
[ { "created": "Mon, 30 Nov 2020 21:45:49 GMT", "version": "v1" } ]
2020-12-04
[ [ "Lopez-Guevara", "Tatiana", "" ], [ "Burke", "Michael", "" ], [ "Taylor", "Nicholas K.", "" ], [ "Subr", "Kartic", "" ] ]
Model-free reinforcement learning (RL) is a powerful tool to learn a broad range of robot skills and policies. However, a lack of policy interpretability can inhibit their successful deployment in downstream applications, particularly when differences in environmental conditions may result in unpredictable behaviour or generalisation failures. As a result, there has been a growing emphasis in machine learning around the inclusion of stronger inductive biases in models to improve generalisation. This paper proposes an alternative strategy, inverse value estimation for interpretable policy certificates (IV-Posterior), which seeks to identify the inductive biases or idealised conditions of operation already held by pre-trained policies, and then use this information to guide their deployment. IV-Posterior uses Masked Autoregressive Flows to fit distributions over the set of conditions or environmental parameters in which a policy is likely to be effective. This distribution can then be used as a policy certificate in downstream applications. We illustrate the use of IV-Posterior across two environments, and show that substantial performance gains can be obtained when policy selection incorporates knowledge of the inductive biases that these policies hold.
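The certificate idea can be sketched with a plain Gaussian density standing in for the Masked Autoregressive Flow: fit a distribution over the environment parameters where the policy succeeded, then score new conditions before deployment. Parameter names and numbers below are made-up illustrations:

```python
import numpy as np

def fit_certificate(env_params):
    """Fit a Gaussian over environment parameters where a policy succeeded
    (a simple stand-in for the Masked Autoregressive Flow in the paper)."""
    mu = env_params.mean(axis=0)
    cov = np.cov(env_params, rowvar=False) + 1e-6 * np.eye(env_params.shape[1])
    return mu, cov

def certificate_logpdf(x, mu, cov):
    """Log-density of environment parameters x under the fitted certificate."""
    k = mu.size
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (k * np.log(2 * np.pi) + logdet + quad)

rng = np.random.default_rng(0)
# Hypothetical (friction, mass) values from rollouts where the policy worked.
good_envs = rng.normal([1.0, 0.2], 0.1, size=(200, 2))
mu, cov = fit_certificate(good_envs)
in_dist = certificate_logpdf(np.array([1.0, 0.2]), mu, cov)
out_dist = certificate_logpdf(np.array([3.0, 2.0]), mu, cov)
print(in_dist > out_dist)  # deploy the policy only where its certificate is high
```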