Dataset fields (type, length range):
id: string, 9 to 10 characters
submitter: string, 1 to 64 characters
authors: string, 4 to 20.7k characters
title: string, 4 to 246 characters
comments: string, 1 to 523 characters
journal-ref: string, 4 to 404 characters
doi: string, 11 to 153 characters
report-no: string, 2 to 254 characters
categories: string, 5 to 98 characters
license: string, 9 distinct values
orig_abstract: string, 14 to 3.35k characters
versions: list, 1 to 60 items
update_date: string, 10 characters
authors_parsed: list, 1 to 1.35k items
abstract: string, 11 to 3.34k characters
2206.01990
Lorenzo Tamellini
Emily A. Baker, Alessandro Cappato, Sara Todeschini, Lorenzo Tamellini, Giancarlo Sangalli, Alessandro Reali, Sauro Manenti
Combining the Morris Method and Multiple Error Metrics to Assess Aquifer Characteristics and Recharge in the Lower Ticino Basin, in Italy
second submission after minor revisions
null
null
null
cs.CE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Groundwater flow model accuracy is often limited by the uncertainty in model parameters that characterize aquifer properties and aquifer recharge. Aquifer properties such as hydraulic conductivity can have an uncertainty spanning orders of magnitude. Meanwhile, parameters used to configure model boundary conditions can introduce additional uncertainty. In this study, the Morris Method sensitivity analysis is performed on multiple quantities of interest to assess the sensitivity of a steady-state groundwater flow model to uncertain input parameters. The Morris Method determines which of these parameters are less influential on model outputs. Uninfluential parameters can be set constant during subsequent parameter optimization to reduce computational expense. Combining multiple quantities of interest (e.g., RMSE, groundwater fluxes) when performing both the Morris Method and parameter optimization offers a more complete assessment of groundwater models, providing a more reliable and physically consistent estimate of uncertain parameters. The parameter optimization procedure also provides an estimate of the residual uncertainty in the parameter values, giving a more complete picture of the remaining uncertainty. By employing such techniques, the current study was able to estimate the aquifer hydraulic conductivity and recharge rate due to rice field irrigation in a groundwater basin in Northern Italy, revealing that a significant proportion of surficial aquifer recharge (approximately 81-94%) during the late summer is due to the flood irrigation practices applied to these fields.
[ { "created": "Sat, 4 Jun 2022 13:28:39 GMT", "version": "v1" }, { "created": "Thu, 8 Sep 2022 08:12:56 GMT", "version": "v2" } ]
2022-09-09
[ [ "Baker", "Emily A.", "" ], [ "Cappato", "Alessandro", "" ], [ "Todeschini", "Sara", "" ], [ "Tamellini", "Lorenzo", "" ], [ "Sangalli", "Giancarlo", "" ], [ "Reali", "Alessandro", "" ], [ "Manenti", "Sauro", "" ] ]
Groundwater flow model accuracy is often limited by the uncertainty in model parameters that characterize aquifer properties and aquifer recharge. Aquifer properties such as hydraulic conductivity can have an uncertainty spanning orders of magnitude. Meanwhile, parameters used to configure model boundary conditions can introduce additional uncertainty. In this study, the Morris Method sensitivity analysis is performed on multiple quantities of interest to assess the sensitivity of a steady-state groundwater flow model to uncertain input parameters. The Morris Method determines which of these parameters are less influential on model outputs. Uninfluential parameters can be set constant during subsequent parameter optimization to reduce computational expense. Combining multiple quantities of interest (e.g., RMSE, groundwater fluxes) when performing both the Morris Method and parameter optimization offers a more complete assessment of groundwater models, providing a more reliable and physically consistent estimate of uncertain parameters. The parameter optimization procedure also provides an estimate of the residual uncertainty in the parameter values, giving a more complete picture of the remaining uncertainty. By employing such techniques, the current study was able to estimate the aquifer hydraulic conductivity and recharge rate due to rice field irrigation in a groundwater basin in Northern Italy, revealing that a significant proportion of surficial aquifer recharge (approximately 81-94%) during the late summer is due to the flood irrigation practices applied to these fields.
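As an illustration of the elementary-effects screening this abstract refers to, the minimal numpy sketch below runs a Morris-style analysis on a made-up three-parameter test function; the `toy_model`, its coefficients, and the trajectory settings are hypothetical stand-ins, not the paper's groundwater model.

```python
import numpy as np

def morris_screening(model, bounds, n_trajectories=20, delta=0.5, seed=0):
    """Elementary-effects (Morris) screening for a scalar-valued model.

    bounds: array of shape (k, 2) with [low, high] per input parameter.
    Returns mu_star (mean absolute elementary effect) and sigma per input.
    """
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    effects = np.zeros((n_trajectories, k))
    for t in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in the unit cube
        y0 = model(lo + x * (hi - lo))
        for i in rng.permutation(k):                 # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            y1 = model(lo + x_new * (hi - lo))
            effects[t, i] = (y1 - y0) / delta        # elementary effect of factor i
            x, y0 = x_new, y1                        # continue along the trajectory
    mu_star = np.abs(effects).mean(axis=0)
    sigma = effects.std(axis=0, ddof=1)
    return mu_star, sigma

# Toy stand-in for a groundwater-model output (e.g., an RMSE of simulated heads):
# the response depends strongly on p[0] and p[2], weakly on p[1].
def toy_model(p):
    return 10.0 * p[0] + 0.1 * p[1] + 5.0 * p[2] + 2.0 * p[0] * p[2]

bounds = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
mu_star, sigma = morris_screening(toy_model, bounds)
print("mu* :", np.round(mu_star, 2))   # small mu* -> candidate to hold constant
print("sigma:", np.round(sigma, 2))    # large sigma -> interactions/nonlinearity
```

Inputs with small mu* are the "uninfluential" parameters the abstract says can be fixed before the more expensive optimization step.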
2404.12143
Hilde Weerts
Hilde Weerts, Rapha\"ele Xenidis, Fabien Tarissan, Henrik Palmer Olsen, Mykola Pechenizkiy
The Neutrality Fallacy: When Algorithmic Fairness Interventions are (Not) Positive Action
null
2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24)
10.1145/3630106.3659025
null
cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Various metrics and interventions have been developed to identify and mitigate unfair outputs of machine learning systems. While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law. As the Court of Justice of the European Union has been strict when it comes to assessing the lawfulness of positive action, this would impose a significant legal burden on those wishing to implement fair-ml interventions. In this paper, we propose that algorithmic fairness interventions should often be interpreted as a means to prevent discrimination, rather than a measure of positive action. Specifically, we suggest that this category mistake can often be attributed to neutrality fallacies: faulty assumptions regarding the neutrality of fairness-aware algorithmic decision-making. Our findings raise the question of whether a negative obligation to refrain from discrimination is sufficient in the context of algorithmic decision-making. Consequently, we suggest moving away from a duty to 'not do harm' towards a positive obligation to actively 'do no harm' as a more adequate framework for algorithmic decision-making and fair-ml interventions.
[ { "created": "Thu, 18 Apr 2024 12:44:35 GMT", "version": "v1" } ]
2024-04-19
[ [ "Weerts", "Hilde", "" ], [ "Xenidis", "Raphaële", "" ], [ "Tarissan", "Fabien", "" ], [ "Olsen", "Henrik Palmer", "" ], [ "Pechenizkiy", "Mykola", "" ] ]
Various metrics and interventions have been developed to identify and mitigate unfair outputs of machine learning systems. While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law. As the Court of Justice of the European Union has been strict when it comes to assessing the lawfulness of positive action, this would impose a significant legal burden on those wishing to implement fair-ml interventions. In this paper, we propose that algorithmic fairness interventions should often be interpreted as a means to prevent discrimination, rather than a measure of positive action. Specifically, we suggest that this category mistake can often be attributed to neutrality fallacies: faulty assumptions regarding the neutrality of fairness-aware algorithmic decision-making. Our findings raise the question of whether a negative obligation to refrain from discrimination is sufficient in the context of algorithmic decision-making. Consequently, we suggest moving away from a duty to 'not do harm' towards a positive obligation to actively 'do no harm' as a more adequate framework for algorithmic decision-making and fair-ml interventions.
2311.12205
Mansi Girdhar
Mansi Girdhar, Junho Hong, Wencong Su, Akila Herath, Chen-Ching Liu
SDN-Based Dynamic Cybersecurity Framework of IEC-61850 Communications in Smart Grid
5 pages, 6 figures, 1 table, conference paper, supported by DOE (CESER) program
null
null
null
cs.CR cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
In recent years, critical infrastructure and power grids have experienced a series of cyber-attacks, leading to temporary, widespread blackouts of considerable magnitude. Since most substations are unmanned and have limited physical security protection, cyber breaches into power grid substations present a risk. Nowadays, software-defined networking (SDN), a popular virtual network technology based on the OpenFlow protocol, is being widely used in substation automation systems. However, research findings indicate that the susceptibility of the SDN architecture to cyber-attacks has increased notably in recent years, suggesting a growing concern regarding potential cybersecurity breaches within the SDN framework. In this paper, we propose a hybrid intrusion detection system (IDS)-integrated SDN architecture for detecting and preventing the injection of malicious IEC 61850-based generic object-oriented substation event (GOOSE) messages in a digital substation. Additionally, the proposed scheme locates the fault and, as a form of mitigation, disables the corresponding port. Furthermore, implementation examples are demonstrated and verified using a hardware-in-the-loop (HIL) testbed that mimics the functioning of a digital substation.
[ { "created": "Mon, 20 Nov 2023 21:49:41 GMT", "version": "v1" }, { "created": "Thu, 7 Mar 2024 17:17:43 GMT", "version": "v2" } ]
2024-03-08
[ [ "Girdhar", "Mansi", "" ], [ "Hong", "Junho", "" ], [ "Su", "Wencong", "" ], [ "Herath", "Akila", "" ], [ "Liu", "Chen-Ching", "" ] ]
In recent years, critical infrastructure and power grids have experienced a series of cyber-attacks, leading to temporary, widespread blackouts of considerable magnitude. Since most substations are unmanned and have limited physical security protection, cyber breaches into power grid substations present a risk. Nowadays, software-defined networking (SDN), a popular virtual network technology based on the OpenFlow protocol, is being widely used in substation automation systems. However, research findings indicate that the susceptibility of the SDN architecture to cyber-attacks has increased notably in recent years, suggesting a growing concern regarding potential cybersecurity breaches within the SDN framework. In this paper, we propose a hybrid intrusion detection system (IDS)-integrated SDN architecture for detecting and preventing the injection of malicious IEC 61850-based generic object-oriented substation event (GOOSE) messages in a digital substation. Additionally, the proposed scheme locates the fault and, as a form of mitigation, disables the corresponding port. Furthermore, implementation examples are demonstrated and verified using a hardware-in-the-loop (HIL) testbed that mimics the functioning of a digital substation.
2303.01042
Yuhu Shang
Yuhu Shang, Xuexiong Luo, Lihong Wang, Hao Peng, Xiankun Zhang, Yimeng Ren, Kun Liang
Reinforcement Learning Guided Multi-Objective Exam Paper Generation
null
null
null
null
cs.LG cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To reduce the repetitive and complex work of instructors, the exam paper generation (EPG) technique, which aims to generate high-quality exam papers automatically according to instructor-specified assessment criteria, has become a salient topic in the intelligent education field. Current approaches utilize heuristic algorithms to optimize several well-known objective constraints, such as difficulty degree and number of questions, to produce optimal solutions. However, in real scenarios, considering other equally relevant objectives (e.g., distribution of exam scores, skill coverage) is extremely important. Moreover, developing an automatic multi-objective solution that finds an optimal subset of questions from the huge search space of large question datasets, and thus composes a high-quality exam paper, is urgent but non-trivial. To this end, we design a reinforcement learning guided Multi-Objective Exam Paper Generation framework, termed MOEPG, to simultaneously optimize three exam domain-specific objectives: difficulty degree, distribution of exam scores, and skill coverage. Specifically, to accurately measure the skill proficiency of the examinee group, we first employ deep knowledge tracing to model the interaction information between examinees and response logs. We then design the flexible Exam Q-Network, a function approximator, which automatically selects the appropriate question to update the exam paper composition process. MOEPG then divides the decision space into multiple subspaces to better guide the update direction of the exam paper. Through extensive experiments on two real-world datasets, we demonstrate that MOEPG is feasible in addressing the multiple dilemmas of the exam paper generation scenario.
[ { "created": "Thu, 2 Mar 2023 07:55:52 GMT", "version": "v1" } ]
2023-03-03
[ [ "Shang", "Yuhu", "" ], [ "Luo", "Xuexiong", "" ], [ "Wang", "Lihong", "" ], [ "Peng", "Hao", "" ], [ "Zhang", "Xiankun", "" ], [ "Ren", "Yimeng", "" ], [ "Liang", "Kun", "" ] ]
To reduce the repetitive and complex work of instructors, the exam paper generation (EPG) technique, which aims to generate high-quality exam papers automatically according to instructor-specified assessment criteria, has become a salient topic in the intelligent education field. Current approaches utilize heuristic algorithms to optimize several well-known objective constraints, such as difficulty degree and number of questions, to produce optimal solutions. However, in real scenarios, considering other equally relevant objectives (e.g., distribution of exam scores, skill coverage) is extremely important. Moreover, developing an automatic multi-objective solution that finds an optimal subset of questions from the huge search space of large question datasets, and thus composes a high-quality exam paper, is urgent but non-trivial. To this end, we design a reinforcement learning guided Multi-Objective Exam Paper Generation framework, termed MOEPG, to simultaneously optimize three exam domain-specific objectives: difficulty degree, distribution of exam scores, and skill coverage. Specifically, to accurately measure the skill proficiency of the examinee group, we first employ deep knowledge tracing to model the interaction information between examinees and response logs. We then design the flexible Exam Q-Network, a function approximator, which automatically selects the appropriate question to update the exam paper composition process. MOEPG then divides the decision space into multiple subspaces to better guide the update direction of the exam paper. Through extensive experiments on two real-world datasets, we demonstrate that MOEPG is feasible in addressing the multiple dilemmas of the exam paper generation scenario.
0802.3563
Usman Khan
Usman A. Khan, Soummya Kar, and Jose' M. F. Moura
Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes
30 pages, submitted to IEEE Transactions on Signal Processing
U. A. Khan, S. Kar, and J. M. F. Moura, "Distributed sensor localization in random environments using minimal number of anchor nodes," IEEE Transactions on Signal Processing, vol. 57, no. 5, pp. 2000-2016, May 2009
10.1109/TSP.2009.2014812
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper develops DILOC, a \emph{distributive}, \emph{iterative} algorithm that locates M sensors in $\mathbb{R}^m, m\geq 1$, with respect to a minimal number of m+1 anchors with known locations. The sensors exchange data with their neighbors only; no centralized data processing or communication occurs, nor is there centralized knowledge about the sensors' locations. DILOC uses the barycentric coordinates of a sensor with respect to its neighbors that are computed using the Cayley-Menger determinants. These are the determinants of matrices of inter-sensor distances. We show convergence of DILOC by associating with it an absorbing Markov chain whose absorbing states are the anchors. We introduce a stochastic approximation version extending DILOC to random environments when the knowledge about the intercommunications among sensors and the inter-sensor distances are noisy, and the communication links among neighbors fail at random times. We show a.s. convergence of the modified DILOC and characterize the error between the final estimates and the true values of the sensors' locations. Numerical studies illustrate DILOC under a variety of deterministic and random operating conditions.
[ { "created": "Mon, 25 Feb 2008 07:29:19 GMT", "version": "v1" }, { "created": "Thu, 7 Aug 2008 03:07:12 GMT", "version": "v2" } ]
2013-12-19
[ [ "Khan", "Usman A.", "" ], [ "Kar", "Soummya", "" ], [ "Moura", "Jose' M. F.", "" ] ]
The paper develops DILOC, a \emph{distributive}, \emph{iterative} algorithm that locates M sensors in $\mathbb{R}^m, m\geq 1$, with respect to a minimal number of m+1 anchors with known locations. The sensors exchange data with their neighbors only; no centralized data processing or communication occurs, nor is there centralized knowledge about the sensors' locations. DILOC uses the barycentric coordinates of a sensor with respect to its neighbors that are computed using the Cayley-Menger determinants. These are the determinants of matrices of inter-sensor distances. We show convergence of DILOC by associating with it an absorbing Markov chain whose absorbing states are the anchors. We introduce a stochastic approximation version extending DILOC to random environments when the knowledge about the intercommunications among sensors and the inter-sensor distances are noisy, and the communication links among neighbors fail at random times. We show a.s. convergence of the modified DILOC and characterize the error between the final estimates and the true values of the sensors' locations. Numerical studies illustrate DILOC under a variety of deterministic and random operating conditions.
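As a small illustration of the distance-only barycentric computation that the DILOC abstract describes, the sketch below recovers one sensor's position inside a triangle of three anchors using Cayley-Menger determinants; the coordinates are invented, and the full iterative, multi-sensor exchange of DILOC is not reproduced.

```python
import numpy as np

def cm_area(d01, d02, d12):
    # Triangle area from its three side lengths via the Cayley-Menger
    # determinant (equivalent to Heron's formula); uses distances only.
    D = np.array([
        [0.0,    1.0,    1.0,    1.0   ],
        [1.0,    0.0,    d01**2, d02**2],
        [1.0,    d01**2, 0.0,    d12**2],
        [1.0,    d02**2, d12**2, 0.0   ],
    ])
    return np.sqrt(-np.linalg.det(D) / 16.0)

def barycentric_from_distances(d_s, d_anchors):
    # d_s[i]    : distance from the sensor to anchor i (i = 0, 1, 2)
    # d_anchors : 3x3 matrix of inter-anchor distances
    # Valid when the sensor lies inside the anchor triangle, as DILOC
    # assumes for each sensor's triangulation set.
    A_total = cm_area(d_anchors[0, 1], d_anchors[0, 2], d_anchors[1, 2])
    b = np.array([
        cm_area(d_s[1], d_s[2], d_anchors[1, 2]),  # sub-triangle opposite anchor 0
        cm_area(d_s[0], d_s[2], d_anchors[0, 2]),  # sub-triangle opposite anchor 1
        cm_area(d_s[0], d_s[1], d_anchors[0, 1]),  # sub-triangle opposite anchor 2
    ]) / A_total
    return b

# m = 2, so m + 1 = 3 anchors with known positions; one sensor inside their hull.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
sensor = np.array([1.5, 1.0])                       # unknown to the algorithm
d_anchors = np.linalg.norm(anchors[:, None] - anchors[None, :], axis=-1)
d_s = np.linalg.norm(anchors - sensor, axis=-1)     # measured ranges only

b = barycentric_from_distances(d_s, d_anchors)
estimate = b @ anchors                               # convex combination of anchors
print(b.sum(), estimate)                             # b sums to 1; estimate ~ sensor
```

In the full algorithm each sensor forms such a convex combination over its neighbors (sensors or anchors) and iterates, which is what yields the absorbing-Markov-chain convergence argument mentioned in the abstract.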
2104.11396
Linjian Ma
Linjian Ma and Chao Yang
Low Rank Approximation in Simulations of Quantum Algorithms
null
null
null
null
cs.CE cs.NA math.NA
http://creativecommons.org/licenses/by/4.0/
Simulating quantum algorithms on classical computers is challenging when the system size, i.e., the number of qubits used in the quantum algorithm, is moderately large. However, some quantum algorithms and the corresponding quantum circuits can be simulated efficiently on a classical computer if the input quantum state is a low-rank tensor and all intermediate states of the quantum algorithm can be represented or approximated by low-rank tensors. In this paper, we examine the possibility of simulating a few quantum algorithms by using low-rank canonical polyadic (CP) decomposition to represent the input and all intermediate states of these algorithms. Two rank reduction algorithms are used to enable efficient simulation. We show that some of the algorithms preserve the low-rank structure of the input state and can thus be efficiently simulated on a classical computer. However, the rank of the intermediate states in other quantum algorithms can increase rapidly, making efficient simulation more difficult. To some extent, such difficulty reflects the advantage or superiority of a quantum computer over a classical computer. As a result, understanding the low-rank structure of a quantum algorithm allows us to identify algorithms that can benefit significantly from quantum computers.
[ { "created": "Fri, 23 Apr 2021 03:12:52 GMT", "version": "v1" } ]
2021-04-26
[ [ "Ma", "Linjian", "" ], [ "Yang", "Chao", "" ] ]
Simulating quantum algorithms on classical computers is challenging when the system size, i.e., the number of qubits used in the quantum algorithm, is moderately large. However, some quantum algorithms and the corresponding quantum circuits can be simulated efficiently on a classical computer if the input quantum state is a low-rank tensor and all intermediate states of the quantum algorithm can be represented or approximated by low-rank tensors. In this paper, we examine the possibility of simulating a few quantum algorithms by using low-rank canonical polyadic (CP) decomposition to represent the input and all intermediate states of these algorithms. Two rank reduction algorithms are used to enable efficient simulation. We show that some of the algorithms preserve the low-rank structure of the input state and can thus be efficiently simulated on a classical computer. However, the rank of the intermediate states in other quantum algorithms can increase rapidly, making efficient simulation more difficult. To some extent, such difficulty reflects the advantage or superiority of a quantum computer over a classical computer. As a result, understanding the low-rank structure of a quantum algorithm allows us to identify algorithms that can benefit significantly from quantum computers.
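To make the notion of a low-rank, sum-of-product-states (CP) representation concrete, here is a toy numpy sketch showing that a single-qubit gate acts factor-wise and therefore leaves the CP rank unchanged; the random state and the chosen rank are arbitrary assumptions for illustration, not taken from the paper.

```python
import numpy as np
from functools import reduce

# Represent an n-qubit state in rank-R CP ("sum of product states") form:
#   |psi> = sum_r  v[r,0] (x) v[r,1] (x) ... (x) v[r,n-1],  each v[r,i] in C^2.
n, R = 4, 2
rng = np.random.default_rng(0)
factors = rng.normal(size=(R, n, 2)) + 1j * rng.normal(size=(R, n, 2))

def to_dense(factors):
    # Expand the CP form into the full 2**n state vector (for checking only).
    return sum(reduce(np.kron, term) for term in factors)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_single_qubit_gate(factors, gate, qubit):
    # A one-qubit gate acts on one factor of every rank-1 term,
    # so the CP rank R does not grow.
    out = factors.copy()
    out[:, qubit, :] = factors[:, qubit, :] @ gate.T
    return out

factors_h = apply_single_qubit_gate(factors, H, qubit=1)

# Check against the dense simulation: (I (x) H (x) I (x) I) |psi>.
ops = [np.eye(2)] * n
ops[1] = H
dense_gate = reduce(np.kron, ops)
print(np.allclose(to_dense(factors_h), dense_gate @ to_dense(factors)))  # True
```

A two-qubit gate, by contrast, couples the factors of two qubits and in general splits each rank-1 term into a sum, which is the rank growth the abstract points to as the obstacle for efficient classical simulation.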
2309.09112
Jackson Woodruff
Jackson Woodruff and Thomas Koehler and Alexander Brauckmann and Chris Cummins and Sam Ainsworth and Michael F.P. O'Boyle
Rewriting History: Repurposing Domain-Specific CGRAs
null
null
null
null
cs.PL cs.AR
http://creativecommons.org/licenses/by/4.0/
Coarse-grained reconfigurable arrays (CGRAs) are domain-specific devices promising both the flexibility of FPGAs and the performance of ASICs. However, with restricted domains comes a danger: designing chips that cannot accelerate enough current and future software to justify the hardware cost. We introduce FlexC, the first flexible CGRA compiler, which allows CGRAs to be adapted to operations they do not natively support. FlexC uses dataflow rewriting, replacing unsupported regions of code with equivalent operations that are supported by the CGRA. We use equality saturation, a technique enabling efficient exploration of a large space of rewrite rules, to effectively search through the program space for supported programs. We applied FlexC to over 2,000 loop kernels, compiling to four different research CGRAs and 300 generated CGRAs, and demonstrate a 2.2$\times$ increase in the number of loop kernels accelerated, leading to a 3$\times$ speedup compared to an Arm A5 CPU on kernels that would otherwise be unsupported by the accelerator.
[ { "created": "Sat, 16 Sep 2023 23:58:55 GMT", "version": "v1" } ]
2023-09-19
[ [ "Woodruff", "Jackson", "" ], [ "Koehler", "Thomas", "" ], [ "Brauckmann", "Alexander", "" ], [ "Cummins", "Chris", "" ], [ "Ainsworth", "Sam", "" ], [ "O'Boyle", "Michael F. P.", "" ] ]
Coarse-grained reconfigurable arrays (CGRAs) are domain-specific devices promising both the flexibility of FPGAs and the performance of ASICs. However, with restricted domains comes a danger: designing chips that cannot accelerate enough current and future software to justify the hardware cost. We introduce FlexC, the first flexible CGRA compiler, which allows CGRAs to be adapted to operations they do not natively support. FlexC uses dataflow rewriting, replacing unsupported regions of code with equivalent operations that are supported by the CGRA. We use equality saturation, a technique enabling efficient exploration of a large space of rewrite rules, to effectively search through the program space for supported programs. We applied FlexC to over 2,000 loop kernels, compiling to four different research CGRAs and 300 generated CGRAs, and demonstrate a 2.2$\times$ increase in the number of loop kernels accelerated, leading to a 3$\times$ speedup compared to an Arm A5 CPU on kernels that would otherwise be unsupported by the accelerator.
1401.5612
Aymen Louati
Aymen Louati, Chadlia Jerad, Kamel Barkaoui
Formalization and Verification of Hierarchical Use of Interaction Overview Diagrams Using Timing Diagrams
8 pages, 6 figures
null
null
null
cs.SE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thanks to its graphical notation and simplicity, the Unified Modeling Language (UML) is a de facto standard and a widespread language used in both industry and academia, despite the fact that its semantics is still informal. The Interaction Overview Diagram (IOD), introduced in UML2, allows behavior to be specified in a hierarchical way. This paper is a contribution towards a formal dynamic semantics of UML2. We start by formalizing the hierarchical use of IODs. Afterward, we complete the mapping of IODs, Sequence Diagrams and Timing Diagrams into Hierarchical Colored Petri Nets (HCPNs) using Timed Colored Petri Nets (timed CP-nets). Our approach helps designers benefit from abstraction as well as refinement at more than two levels of hierarchy, which reduces verification complexity.
[ { "created": "Wed, 22 Jan 2014 10:22:32 GMT", "version": "v1" } ]
2014-01-23
[ [ "Louati", "Aymen", "" ], [ "Jerad", "Chadlia", "" ], [ "Barkaoui", "Kamel", "" ] ]
Thanks to its graphical notation and simplicity, the Unified Modeling Language (UML) is a de facto standard and a widespread language used in both industry and academia, despite the fact that its semantics is still informal. The Interaction Overview Diagram (IOD), introduced in UML2, allows behavior to be specified in a hierarchical way. This paper is a contribution towards a formal dynamic semantics of UML2. We start by formalizing the hierarchical use of IODs. Afterward, we complete the mapping of IODs, Sequence Diagrams and Timing Diagrams into Hierarchical Colored Petri Nets (HCPNs) using Timed Colored Petri Nets (timed CP-nets). Our approach helps designers benefit from abstraction as well as refinement at more than two levels of hierarchy, which reduces verification complexity.
2004.06286
Dongrui Wu
Dongrui Wu and Yifan Xu and Bao-Liang Lu
Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016
null
IEEE Trans. on Cognitive and Developmental Systems, 14(1):4-19, 2022
null
null
cs.HC cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. The most common non-invasive BCI modality, electroencephalogram (EEG), is sensitive to noise/artifact and suffers between-subject/within-subject non-stationarity. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce the amount of calibration effort. This paper reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications -- motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks -- are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the paper, which may point to future research directions.
[ { "created": "Mon, 13 Apr 2020 16:44:55 GMT", "version": "v1" }, { "created": "Sat, 18 Apr 2020 22:13:09 GMT", "version": "v2" }, { "created": "Wed, 6 May 2020 22:19:40 GMT", "version": "v3" }, { "created": "Fri, 3 Jul 2020 23:34:11 GMT", "version": "v4" } ]
2022-11-15
[ [ "Wu", "Dongrui", "" ], [ "Xu", "Yifan", "" ], [ "Lu", "Bao-Liang", "" ] ]
A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. The most common non-invasive BCI modality, electroencephalogram (EEG), is sensitive to noise/artifact and suffers between-subject/within-subject non-stationarity. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce the amount of calibration effort. This paper reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications -- motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks -- are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the paper, which may point to future research directions.
1102.5400
Xianfu Chen
Xianfu Chen, Zhifeng Zhao, and Honggang Zhang
Power Allocation for Cognitive Wireless Mesh Networks by Applying Multi-agent Q-learning Approach
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/3.0/
As the scarce spectrum resource is becoming over-crowded, cognitive radios (CRs) offer great flexibility to improve spectrum efficiency by opportunistically accessing the authorized frequency bands. One of the critical challenges for operating such radios in a network is how to efficiently allocate transmission powers and frequency resources among the secondary users (SUs) while satisfying the quality-of-service (QoS) constraints of the primary users (PUs). In this paper, we focus on the non-cooperative power allocation problem in cognitive wireless mesh networks (CogMesh) formed by a number of clusters, with energy efficiency taken into consideration. Due to the SUs' selfish and spontaneous properties, the problem is modeled as a stochastic learning process. We first extend single-agent Q-learning to a multi-user context, and then propose a conjecture-based multi-agent Q-learning algorithm to achieve the optimal transmission strategies with only private and incomplete information. An intelligent SU performs Q-function updates based on the conjecture over the other SUs' stochastic behaviors. This learning algorithm provably converges given certain restrictions that arise during the learning procedure. Simulation experiments are used to verify the performance of our algorithm and demonstrate its effectiveness in improving energy efficiency.
[ { "created": "Sat, 26 Feb 2011 10:29:23 GMT", "version": "v1" } ]
2011-03-01
[ [ "Chen", "Xianfu", "" ], [ "Zhao", "Zhifeng", "" ], [ "Zhang", "Honggang", "" ] ]
As the scarce spectrum resource is becoming over-crowded, cognitive radios (CRs) offer great flexibility to improve spectrum efficiency by opportunistically accessing the authorized frequency bands. One of the critical challenges for operating such radios in a network is how to efficiently allocate transmission powers and frequency resources among the secondary users (SUs) while satisfying the quality-of-service (QoS) constraints of the primary users (PUs). In this paper, we focus on the non-cooperative power allocation problem in cognitive wireless mesh networks (CogMesh) formed by a number of clusters, with energy efficiency taken into consideration. Due to the SUs' selfish and spontaneous properties, the problem is modeled as a stochastic learning process. We first extend single-agent Q-learning to a multi-user context, and then propose a conjecture-based multi-agent Q-learning algorithm to achieve the optimal transmission strategies with only private and incomplete information. An intelligent SU performs Q-function updates based on the conjecture over the other SUs' stochastic behaviors. This learning algorithm provably converges given certain restrictions that arise during the learning procedure. Simulation experiments are used to verify the performance of our algorithm and demonstrate its effectiveness in improving energy efficiency.
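For illustration of the Q-learning component only, the toy sketch below trains a single-agent tabular Q-learner to pick a transmit power per primary-user activity state under a made-up energy-aware reward; the power levels, channel constants, and reward function are hypothetical, and the paper's conjecture-based multi-agent scheme is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

powers = np.array([0.2, 0.5, 1.0, 2.0])   # candidate transmit powers (W), hypothetical
gain, noise = 0.8, 0.1                     # toy channel gain and noise power
alpha, gamma, eps = 0.1, 0.9, 0.1          # learning rate, discount, exploration

# States: 0 = primary user idle, 1 = primary user active (observed by the SU).
Q = np.zeros((2, len(powers)))

def reward(state, p):
    # Toy energy-aware utility: achievable rate minus an energy cost, with a
    # penalty for exceeding an interference limit while the PU is active.
    rate = np.log2(1.0 + gain * p / noise)
    if state == 1 and p > 0.5:
        return -1.0
    return rate - 0.5 * p

state = rng.integers(2)
for step in range(20000):
    a = rng.integers(len(powers)) if rng.random() < eps else int(Q[state].argmax())
    r = reward(state, powers[a])
    next_state = rng.integers(2)           # PU activity evolves independently here
    # Standard Q-learning update.
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

print("Learned power per state:", powers[Q.argmax(axis=1)])
```

Under this toy reward the learner settles on high power when the primary user is idle and low power when it is active, which is the qualitative behavior the abstract's energy-efficient, QoS-constrained allocation aims for.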
2404.02806
Hussein Mozannar
Hussein Mozannar, Valerie Chen, Mohammed Alsobay, Subhro Das, Sebastian Zhao, Dennis Wei, Manish Nagireddy, Prasanna Sattigeri, Ameet Talwalkar, David Sontag
The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers
null
null
null
null
cs.SE cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluation of large language models (LLMs) for code has primarily relied on static benchmarks, including HumanEval (Chen et al., 2021), which measure the ability of LLMs to generate complete code that passes unit tests. As LLMs are increasingly used as programmer assistants, we study whether gains on existing benchmarks translate to gains in programmer productivity when coding with LLMs, including time spent coding. In addition to static benchmarks, we investigate the utility of preference metrics that might be used as proxies to measure LLM helpfulness, such as code acceptance or copy rates. To do so, we introduce RealHumanEval, a web interface to measure the ability of LLMs to assist programmers, through either autocomplete or chat support. We conducted a user study (N=213) using RealHumanEval in which users interacted with six LLMs of varying base model performance. Despite static benchmarks not incorporating humans-in-the-loop, we find that improvements in benchmark performance lead to increased programmer productivity; however gaps in benchmark versus human performance are not proportional -- a trend that holds across both forms of LLM support. In contrast, we find that programmer preferences do not correlate with their actual performance, motivating the need for better, human-centric proxy signals. We also open-source RealHumanEval to enable human-centric evaluation of new models and the study data to facilitate efforts to improve code models.
[ { "created": "Wed, 3 Apr 2024 15:20:57 GMT", "version": "v1" } ]
2024-04-04
[ [ "Mozannar", "Hussein", "" ], [ "Chen", "Valerie", "" ], [ "Alsobay", "Mohammed", "" ], [ "Das", "Subhro", "" ], [ "Zhao", "Sebastian", "" ], [ "Wei", "Dennis", "" ], [ "Nagireddy", "Manish", "" ], [ "Sattigeri", "Prasanna", "" ], [ "Talwalkar", "Ameet", "" ], [ "Sontag", "David", "" ] ]
Evaluation of large language models (LLMs) for code has primarily relied on static benchmarks, including HumanEval (Chen et al., 2021), which measure the ability of LLMs to generate complete code that passes unit tests. As LLMs are increasingly used as programmer assistants, we study whether gains on existing benchmarks translate to gains in programmer productivity when coding with LLMs, including time spent coding. In addition to static benchmarks, we investigate the utility of preference metrics that might be used as proxies to measure LLM helpfulness, such as code acceptance or copy rates. To do so, we introduce RealHumanEval, a web interface to measure the ability of LLMs to assist programmers, through either autocomplete or chat support. We conducted a user study (N=213) using RealHumanEval in which users interacted with six LLMs of varying base model performance. Despite static benchmarks not incorporating humans-in-the-loop, we find that improvements in benchmark performance lead to increased programmer productivity; however gaps in benchmark versus human performance are not proportional -- a trend that holds across both forms of LLM support. In contrast, we find that programmer preferences do not correlate with their actual performance, motivating the need for better, human-centric proxy signals. We also open-source RealHumanEval to enable human-centric evaluation of new models and the study data to facilitate efforts to improve code models.
1712.09014
Michael Gagen Dr
M. J. Gagen
Null Dynamical State Models of Human Cognitive Dysfunction
17 pages, 0 figures
null
null
null
cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The hard problem in artificial intelligence asks how the shuffling of syntactical symbols in a program can lead to systems which experience semantics and qualia. We address this question in three stages. First, we introduce a new class of human semantic symbols which appears when unexpected and drastic environmental change causes humans to become surprised, confused, uncertain, and in extreme cases, unresponsive, passive and dysfunctional. For this class of symbols, pre-learned programs become inoperative, so these syntactical programs cannot be the source of experienced qualia. Second, we model the dysfunctional human response to a radically changed environment as being the natural response of any learning machine facing novel inputs from well outside its previous training set. In this situation, learning machines are unable to extract information from their input and will typically enter a dynamical state characterized by null outputs and a lack of response. This state immediately predicts and explains the characteristics of the semantic experiences of humans in similar circumstances. In the third stage, we consider learning machines trained to implement multiple functions in simple sequential programs using environmental data to specify subroutine names, control flow instructions, memory calls, and so on. Drastic change in any of these environmental inputs can again lead to inoperative programs. By examining changes specific to people or locations, we can model human cognitive symbols featuring these dependencies, such as attachment and grief. Our approach links known dynamical machine states with human qualia and thus offers new insight into the hard problem of artificial intelligence.
[ { "created": "Mon, 25 Dec 2017 05:46:19 GMT", "version": "v1" } ]
2017-12-27
[ [ "Gagen", "M. J.", "" ] ]
The hard problem in artificial intelligence asks how the shuffling of syntactical symbols in a program can lead to systems which experience semantics and qualia. We address this question in three stages. First, we introduce a new class of human semantic symbols which appears when unexpected and drastic environmental change causes humans to become surprised, confused, uncertain, and in extreme cases, unresponsive, passive and dysfunctional. For this class of symbols, pre-learned programs become inoperative, so these syntactical programs cannot be the source of experienced qualia. Second, we model the dysfunctional human response to a radically changed environment as being the natural response of any learning machine facing novel inputs from well outside its previous training set. In this situation, learning machines are unable to extract information from their input and will typically enter a dynamical state characterized by null outputs and a lack of response. This state immediately predicts and explains the characteristics of the semantic experiences of humans in similar circumstances. In the third stage, we consider learning machines trained to implement multiple functions in simple sequential programs using environmental data to specify subroutine names, control flow instructions, memory calls, and so on. Drastic change in any of these environmental inputs can again lead to inoperative programs. By examining changes specific to people or locations, we can model human cognitive symbols featuring these dependencies, such as attachment and grief. Our approach links known dynamical machine states with human qualia and thus offers new insight into the hard problem of artificial intelligence.
1006.5927
Debotosh Bhattacharjee
Sandhya Arora, Latesh Malik, Debotosh Bhattacharjee, and Mita Nasipuri
Classification Of Gradient Change Features Using MLP For Handwritten Character Recognition
null
EAIT 2006
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel, generic scheme for recognizing off-line handwritten English alphabet character images is proposed. The advantage of the technique is that it can be applied in a generic manner to different applications and is expected to perform better in uncertain and noisy environments. The recognition scheme uses a multilayer perceptron (MLP) neural network. The system was trained and tested on a database of 300 samples of handwritten characters. For improved generalization and to avoid overtraining, the whole available dataset has been divided into two subsets: a training set and a test set. We achieved 99.10% and 94.15% correct recognition rates on the training and test sets, respectively. The proposed scheme is robust with respect to various writing styles and sizes as well as the presence of considerable noise.
[ { "created": "Wed, 30 Jun 2010 17:14:40 GMT", "version": "v1" } ]
2010-07-01
[ [ "Arora", "Sandhya", "" ], [ "Malik", "Latesh", "" ], [ "Bhattacharjee", "Debotosh", "" ], [ "Nasipuri", "Mita", "" ] ]
A novel, generic scheme for recognizing off-line handwritten English alphabet character images is proposed. The advantage of the technique is that it can be applied in a generic manner to different applications and is expected to perform better in uncertain and noisy environments. The recognition scheme uses a multilayer perceptron (MLP) neural network. The system was trained and tested on a database of 300 samples of handwritten characters. For improved generalization and to avoid overtraining, the whole available dataset has been divided into two subsets: a training set and a test set. We achieved 99.10% and 94.15% correct recognition rates on the training and test sets, respectively. The proposed scheme is robust with respect to various writing styles and sizes as well as the presence of considerable noise.
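As a minimal stand-in for the MLP classification stage described above, the sketch below trains scikit-learn's MLPClassifier on its bundled digits dataset; the paper's 300-sample handwritten-character database and its gradient-change features are not available here, so raw pixels and the digits data are assumptions made purely for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load a small handwritten-digit dataset as a stand-in for the character database.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# One hidden layer, as in a classic multilayer perceptron setup.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)

# Report training and held-out accuracy, mirroring the train/test split above.
print("train accuracy:", mlp.score(X_train, y_train))
print("test accuracy: ", mlp.score(X_test, y_test))
```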
2404.13179
Zeinab Nezami
Zeinab Nezami, Emmanouil Chaniotakis, Evangelos Pournaras
When Computing follows Vehicles: Decentralized Mobility-Aware Resource Allocation for Edge-to-Cloud Continuum
null
null
null
null
cs.DC cs.MA
http://creativecommons.org/licenses/by/4.0/
The transformation of smart mobility is unprecedented: autonomous, shared and electric connected vehicles, along with the urgent need to meet ambitious net-zero targets by shifting to low-carbon transport modalities, result in new traffic patterns and requirements for real-time computation at large scale, for instance for augmented reality applications. The cloud computing paradigm can neither respond to such low-latency requirements nor adapt resource allocation to such dynamic spatio-temporal service requests. This paper addresses this grand challenge by introducing a novel decentralized optimization framework for mobility-aware edge-to-cloud resource allocation, service offloading, provisioning and load-balancing. In contrast to related work, this framework demonstrates superior efficiency and cost-effectiveness when evaluated on real-world traffic settings and mobility datasets. This breakthrough capability of 'computing follows vehicles' proves capable of reducing utilization variance by more than 40 times, while preventing service deadline violations by 14%-34%.
[ { "created": "Fri, 19 Apr 2024 21:03:54 GMT", "version": "v1" }, { "created": "Sun, 5 May 2024 16:41:34 GMT", "version": "v2" } ]
2024-05-07
[ [ "Nezami", "Zeinab", "" ], [ "Chaniotakis", "Emmanouil", "" ], [ "Pournaras", "Evangelos", "" ] ]
The transformation of smart mobility is unprecedented: autonomous, shared and electric connected vehicles, along with the urgent need to meet ambitious net-zero targets by shifting to low-carbon transport modalities, result in new traffic patterns and requirements for real-time computation at large scale, for instance for augmented reality applications. The cloud computing paradigm can neither respond to such low-latency requirements nor adapt resource allocation to such dynamic spatio-temporal service requests. This paper addresses this grand challenge by introducing a novel decentralized optimization framework for mobility-aware edge-to-cloud resource allocation, service offloading, provisioning and load-balancing. In contrast to related work, this framework demonstrates superior efficiency and cost-effectiveness when evaluated on real-world traffic settings and mobility datasets. This breakthrough capability of 'computing follows vehicles' proves capable of reducing utilization variance by more than 40 times, while preventing service deadline violations by 14%-34%.
2104.09827
Sagnik Mukherjee
Jay Mundra, Rohan Gupta, Sagnik Mukherjee
WASSA@IITK at WASSA 2021: Multi-task Learning and Transformer Finetuning for Emotion Classification and Empathy Prediction
Accepted at WASSA-2021, 4 Pages + 1 Page (references)
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper describes our contribution to the WASSA 2021 shared task on Empathy Prediction and Emotion Classification. The broad goal of this task was to model an empathy score, a distress score and the overall level of emotion of an essay written in response to a newspaper article associated with harm to someone. We made extensive use of the ELECTRA model along with advanced deep learning approaches such as multi-task learning. Additionally, we also leveraged standard machine learning techniques like ensembling. Our system achieves a Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of 0.5528 on sub-task II. We ranked 1st in the Emotion Classification sub-task and 3rd in the Empathy Prediction sub-task.
[ { "created": "Tue, 20 Apr 2021 08:24:10 GMT", "version": "v1" } ]
2021-04-21
[ [ "Mundra", "Jay", "" ], [ "Gupta", "Rohan", "" ], [ "Mukherjee", "Sagnik", "" ] ]
This paper describes our contribution to the WASSA 2021 shared task on Empathy Prediction and Emotion Classification. The broad goal of this task was to model an empathy score, a distress score and the overall level of emotion of an essay written in response to a newspaper article associated with harm to someone. We made extensive use of the ELECTRA model along with advanced deep learning approaches such as multi-task learning. Additionally, we also leveraged standard machine learning techniques like ensembling. Our system achieves a Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of 0.5528 on sub-task II. We ranked 1st in the Emotion Classification sub-task and 3rd in the Empathy Prediction sub-task.
2208.12743
Man Zhang
Man Zhang, Andrea Arcuri, Yonggang Li, Yang Liu, Kaiming Xue
White-box Fuzzing RPC-based APIs with EvoMaster: An Industrial Case Study
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Remote Procedure Call (RPC) is a communication protocol to support client-server interactions among services over a network. RPC is widely applied in industry for building large-scale distributed systems, such as Microservices. Modern RPC frameworks include for example Thrift, gRPC, SOFARPC and Dubbo. Testing such systems is very challenging, due to the complexity of distributed systems and various RPC frameworks the system could employ. To the best of our knowledge, there does not exist any tool or solution that could enable automated testing of modern RPC-based services. To fill this gap, in this paper we propose the first approach in the literature, together with an open-source tool, for white-box fuzzing modern RPC-based APIs with search. To assess our novel approach, we conducted an empirical study with two artificial and four industrial APIs selected by our industrial partner. The tool has been integrated into a real industrial pipeline, and could be applied to real industrial development process for fuzzing RPC-based APIs. To further demonstrate its effectiveness and application in industrial settings, we also report results of employing our tool for fuzzing another 50 industrial APIs autonomously conducted by our industrial partner in their testing processes. Results show that our novel approach is capable of enabling automated test case generation for industrial RPC-based APIs (i.e., two artificial and 54 industrial). We also compared with a simple grey-box technique and existing manually written tests. Our white-box solution achieves significant improvements on code coverage. Regarding fault detection, by conducting a careful review with our industrial partner of the tests generated by our novel approach in the selected four industrial APIs, a total of 41 real faults were identified, which have now been fixed. Another 8,377 detected faults are currently under investigation.
[ { "created": "Fri, 26 Aug 2022 15:54:07 GMT", "version": "v1" }, { "created": "Fri, 3 Feb 2023 10:05:21 GMT", "version": "v2" } ]
2023-02-06
[ [ "Zhang", "Man", "" ], [ "Arcuri", "Andrea", "" ], [ "Li", "Yonggang", "" ], [ "Liu", "Yang", "" ], [ "Xue", "Kaiming", "" ] ]
Remote Procedure Call (RPC) is a communication protocol to support client-server interactions among services over a network. RPC is widely applied in industry for building large-scale distributed systems, such as Microservices. Modern RPC frameworks include for example Thrift, gRPC, SOFARPC and Dubbo. Testing such systems is very challenging, due to the complexity of distributed systems and various RPC frameworks the system could employ. To the best of our knowledge, there does not exist any tool or solution that could enable automated testing of modern RPC-based services. To fill this gap, in this paper we propose the first approach in the literature, together with an open-source tool, for white-box fuzzing modern RPC-based APIs with search. To assess our novel approach, we conducted an empirical study with two artificial and four industrial APIs selected by our industrial partner. The tool has been integrated into a real industrial pipeline, and could be applied to real industrial development process for fuzzing RPC-based APIs. To further demonstrate its effectiveness and application in industrial settings, we also report results of employing our tool for fuzzing another 50 industrial APIs autonomously conducted by our industrial partner in their testing processes. Results show that our novel approach is capable of enabling automated test case generation for industrial RPC-based APIs (i.e., two artificial and 54 industrial). We also compared with a simple grey-box technique and existing manually written tests. Our white-box solution achieves significant improvements on code coverage. Regarding fault detection, by conducting a careful review with our industrial partner of the tests generated by our novel approach in the selected four industrial APIs, a total of 41 real faults were identified, which have now been fixed. Another 8,377 detected faults are currently under investigation.
2302.14299
Andrea Trevi\~no Gavito
Andrea Trevi\~no Gavito, Diego Klabjan, Jean Utke
Gradient-Boosted Based Structured and Unstructured Learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose two frameworks to deal with problem settings in which both structured and unstructured data are available. Structured data problems are best solved by traditional machine learning models such as boosting and tree-based algorithms, whereas deep learning has been widely applied to problems dealing with images, text, audio, and other unstructured data sources. However, for the setting in which both structured and unstructured data are accessible, it is not obvious what the best modeling approach is to enhance performance on both data sources simultaneously. Our proposed frameworks allow joint learning on both kinds of data by integrating the paradigms of boosting models and deep neural networks. The first framework, the boosted-feature-vector deep learning network, learns features from the structured data using gradient boosting and combines them with embeddings from unstructured data via a two-branch deep neural network. Secondly, the two-weak-learner boosting framework extends the boosting paradigm to the setting with two input data sources. We present and compare first- and second-order methods of this framework. Our experimental results on both public and real-world datasets show performance gains achieved by the frameworks over selected baselines by magnitudes of 0.1% - 4.7%.
[ { "created": "Tue, 28 Feb 2023 04:16:42 GMT", "version": "v1" } ]
2023-03-01
[ [ "Gavito", "Andrea Treviño", "" ], [ "Klabjan", "Diego", "" ], [ "Utke", "Jean", "" ] ]
We propose two frameworks to deal with problem settings in which both structured and unstructured data are available. Structured data problems are best solved by traditional machine learning models such as boosting and tree-based algorithms, whereas deep learning has been widely applied to problems dealing with images, text, audio, and other unstructured data sources. However, for the setting in which both structured and unstructured data are accessible, it is not obvious what the best modeling approach is to enhance performance on both data sources simultaneously. Our proposed frameworks allow joint learning on both kinds of data by integrating the paradigms of boosting models and deep neural networks. The first framework, the boosted-feature-vector deep learning network, learns features from the structured data using gradient boosting and combines them with embeddings from unstructured data via a two-branch deep neural network. Secondly, the two-weak-learner boosting framework extends the boosting paradigm to the setting with two input data sources. We present and compare first- and second-order methods of this framework. Our experimental results on both public and real-world datasets show performance gains achieved by the frameworks over selected baselines by magnitudes of 0.1% - 4.7%.
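A rough sketch of the boosted-feature-vector idea described above, under the assumption that leaf indices from gradient-boosted trees can stand in for the learned structured features and that a random matrix can stand in for an unstructured-data embedding; the data and model sizes are synthetic, and this is not the paper's exact two-branch architecture.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 2000
X_struct = rng.normal(size=(n, 5))      # structured/tabular features (synthetic)
X_unstruct = rng.normal(size=(n, 50))   # stand-in for an unstructured-data embedding
y = ((X_struct[:, 0] + 0.5 * X_unstruct[:, 0]) > 0).astype(int)

# Branch 1: boosted feature vector from the structured data.
gbt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
gbt.fit(X_struct, y)
leaves = gbt.apply(X_struct).reshape(n, -1)           # leaf index per tree
leaf_features = OneHotEncoder(handle_unknown="ignore").fit_transform(leaves).toarray()

# Branch 2 + fusion: concatenate with the unstructured embedding and learn jointly.
fused = np.hstack([leaf_features, X_unstruct])
head = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
head.fit(fused, y)
print("fused-model training accuracy:", head.score(fused, y))
```

The design intuition mirrors the abstract: trees handle the tabular branch, a neural head consumes both branches, and the one-hot leaf indices are the "boosted feature vector" passed between them.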
2207.14406
Kalyan Veeramachaneni
Kevin Zhang, Neha Patki, Kalyan Veeramachaneni
Sequential Models in the Synthetic Data Vault
17 pages, 8 figures
null
null
null
cs.LG cs.MS
http://creativecommons.org/licenses/by/4.0/
The goal of this paper is to describe a system for generating synthetic sequential data within the Synthetic Data Vault. To achieve this, we present the Sequential model currently in SDV, an end-to-end framework that builds a generative model for multi-sequence, real-world data. This includes a novel neural network-based machine learning model, the conditional probabilistic auto-regressive (CPAR) model. The overall system and the model are available in the open source Synthetic Data Vault (SDV) library {https://github.com/sdv-dev/SDV}, along with a variety of other models for different synthetic data needs. After building the Sequential SDV, we used it to generate synthetic data and compared its quality against an existing, non-sequential generative adversarial network based model called CTGAN. To compare the sequential synthetic data against its real counterpart, we invented a new metric called Multi-Sequence Aggregate Similarity (MSAS). We used it to conclude that our Sequential SDV model learns higher-level patterns than non-sequential models without any trade-offs in synthetic data quality.
[ { "created": "Thu, 28 Jul 2022 23:17:51 GMT", "version": "v1" } ]
2022-08-01
[ [ "Zhang", "Kevin", "" ], [ "Patki", "Neha", "" ], [ "Veeramachaneni", "Kalyan", "" ] ]
The goal of this paper is to describe a system for generating synthetic sequential data within the Synthetic Data Vault. To achieve this, we present the Sequential model currently in SDV, an end-to-end framework that builds a generative model for multi-sequence, real-world data. This includes a novel neural network-based machine learning model, the conditional probabilistic auto-regressive (CPAR) model. The overall system and the model are available in the open source Synthetic Data Vault (SDV) library {https://github.com/sdv-dev/SDV}, along with a variety of other models for different synthetic data needs. After building the Sequential SDV, we used it to generate synthetic data and compared its quality against an existing, non-sequential generative adversarial network based model called CTGAN. To compare the sequential synthetic data against its real counterpart, we invented a new metric called Multi-Sequence Aggregate Similarity (MSAS). We used it to conclude that our Sequential SDV model learns higher-level patterns than non-sequential models without any trade-offs in synthetic data quality.
1301.4848
Christophe Cruz
Frank Boochs (i3mainz), Andreas Marbs (i3mainz), Hung Truong (i3mainz, Le2i), Helmi Ben Hmida (i3mainz), Ashish Karmacharya (i3mainz, Le2i), Christophe Cruz (Le2i), Adlane Habed (Le2i), Yvon Voisin (Le2i), Christophe Nicolle (Le2i)
Integration of knowledge to support automatic object reconstruction from images and 3D data
null
Systems, Signals and Devices (SSD), 2011 8th International Multi-Conference on, Chemnitz : Germany (2011)
10.1109/SSD.2011.5993558
null
cs.CG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object reconstruction is an important task in many fields of application, as it allows digital representations of our physical world to be generated and used as a basis for analysis, planning, construction, visualization, or other aims. A reconstruction is normally based on reliable data (images or 3D point clouds, for example) that captures the object in its complete extent. This data then has to be compiled and analyzed in order to extract all the geometrical elements that represent the object and form a digital copy of it. Traditional strategies are largely based on manual interaction and interpretation, because as objects grow more complex, human understanding becomes indispensable for achieving acceptable and reliable results. However, human interaction is time-consuming and expensive, which is why much research has already been invested in algorithmic support that speeds up the process and reduces the manual workload. At present, most such supporting algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. By means of these models, which typically represent geometrical features (flatness or roughness, for example) or physical features (color, texture), the data is classified and analyzed. This works well for objects of low complexity but reaches its limits as object complexity increases; purely numerical strategies are then unable to model reality adequately. The intention of our approach is therefore to take human cognitive strategy as an example and to simulate extraction processes based on available human-defined knowledge about the objects of interest. Such processes introduce a semantic structure for the objects and guide the algorithms used to detect and recognize them, yielding higher effectiveness. Hence, our research proposes an approach that uses knowledge to guide the algorithms in 3D point cloud and image processing.
[ { "created": "Mon, 21 Jan 2013 12:42:54 GMT", "version": "v1" } ]
2013-01-22
[ [ "Boochs", "Frank", "", "i3mainz" ], [ "Marbs", "Andreas", "", "i3mainz" ], [ "Truong", "Hung", "", "i3mainz,\n Le2i" ], [ "Hmida", "Helmi Ben", "", "i3mainz" ], [ "Karmacharya", "Ashish", "", "i3mainz, Le2i" ], [ "Cruz", "Christophe", "", "Le2i" ], [ "Habed", "Adlane", "", "Le2i" ], [ "Voisin", "Yvon", "", "Le2i" ], [ "Nicolle", "Christophe", "", "Le2i" ] ]
Object reconstruction is an important task in many fields of application, as it allows digital representations of our physical world to be generated and used as a basis for analysis, planning, construction, visualization, or other aims. A reconstruction is normally based on reliable data (images or 3D point clouds, for example) that captures the object in its complete extent. This data then has to be compiled and analyzed in order to extract all the geometrical elements that represent the object and form a digital copy of it. Traditional strategies are largely based on manual interaction and interpretation, because as objects grow more complex, human understanding becomes indispensable for achieving acceptable and reliable results. However, human interaction is time-consuming and expensive, which is why much research has already been invested in algorithmic support that speeds up the process and reduces the manual workload. At present, most such supporting algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. By means of these models, which typically represent geometrical features (flatness or roughness, for example) or physical features (color, texture), the data is classified and analyzed. This works well for objects of low complexity but reaches its limits as object complexity increases; purely numerical strategies are then unable to model reality adequately. The intention of our approach is therefore to take human cognitive strategy as an example and to simulate extraction processes based on available human-defined knowledge about the objects of interest. Such processes introduce a semantic structure for the objects and guide the algorithms used to detect and recognize them, yielding higher effectiveness. Hence, our research proposes an approach that uses knowledge to guide the algorithms in 3D point cloud and image processing.
2104.01744
Immanuel Trummer Mr.
Junxiong Wang and Immanuel Trummer and Debabrota Basu
UDO: Universal Database Optimization using Reinforcement Learning
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
UDO is a versatile tool for offline tuning of database systems for specific workloads. UDO can consider a variety of tuning choices, ranging from picking transaction code variants and selecting indexes to tuning database system parameters. UDO uses reinforcement learning to converge to near-optimal configurations, creating and evaluating different configurations via actual query executions (instead of relying on simplifying cost models). To cater to different parameter types, UDO distinguishes heavy parameters (which are expensive to change, e.g. physical design parameters) from light parameters. Specifically for optimizing heavy parameters, UDO uses reinforcement learning algorithms that allow delaying the point at which the reward feedback becomes available. This gives us the freedom to optimize the point in time and the order in which different configurations are created and evaluated (by benchmarking a workload sample). UDO uses a cost-based planner to minimize reconfiguration overheads. For instance, it aims to amortize the creation of expensive data structures by consecutively evaluating configurations using them. We evaluate UDO on Postgres as well as MySQL and on TPC-H as well as TPC-C, optimizing a variety of light and heavy parameters concurrently.
[ { "created": "Mon, 5 Apr 2021 02:40:38 GMT", "version": "v1" }, { "created": "Thu, 26 Aug 2021 14:46:08 GMT", "version": "v2" } ]
2021-08-27
[ [ "Wang", "Junxiong", "" ], [ "Trummer", "Immanuel", "" ], [ "Basu", "Debabrota", "" ] ]
UDO is a versatile tool for offline tuning of database systems for specific workloads. UDO can consider a variety of tuning choices, ranging from picking transaction code variants and selecting indexes to tuning database system parameters. UDO uses reinforcement learning to converge to near-optimal configurations, creating and evaluating different configurations via actual query executions (instead of relying on simplifying cost models). To cater to different parameter types, UDO distinguishes heavy parameters (which are expensive to change, e.g. physical design parameters) from light parameters. Specifically for optimizing heavy parameters, UDO uses reinforcement learning algorithms that allow delaying the point at which the reward feedback becomes available. This gives us the freedom to optimize the point in time and the order in which different configurations are created and evaluated (by benchmarking a workload sample). UDO uses a cost-based planner to minimize reconfiguration overheads. For instance, it aims to amortize the creation of expensive data structures by consecutively evaluating configurations using them. We evaluate UDO on Postgres as well as MySQL and on TPC-H as well as TPC-C, optimizing a variety of light and heavy parameters concurrently.
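The remark about amortizing expensive data structures suggests a simple planning idea that can be sketched independently of UDO itself: group candidate configurations by their heavy (expensive-to-change) settings so that each expensive structure is built once and reused for all configurations that share it. The sketch below is an assumption-laden illustration of that idea, not UDO's actual cost-based planner; the parameter names are invented.

```python
# Illustrative only: order configurations so that those sharing the same heavy settings
# (e.g., an index set) are evaluated consecutively, paying the reconfiguration cost once.
from itertools import groupby

def order_for_amortization(configs):
    """configs: list of dicts with 'heavy' (expensive-to-change) and 'light' settings."""
    ordered = sorted(configs, key=lambda c: repr(c["heavy"]))
    plan = []
    for _, group in groupby(ordered, key=lambda c: repr(c["heavy"])):
        group = list(group)
        plan.append(("build", group[0]["heavy"]))                 # create heavy structures once
        plan.extend(("evaluate", c["light"]) for c in group)      # then benchmark light variants
    return plan

configs = [
    {"heavy": frozenset({"idx_orders_date"}), "light": {"work_mem": "64MB"}},
    {"heavy": frozenset(), "light": {"work_mem": "4MB"}},
    {"heavy": frozenset({"idx_orders_date"}), "light": {"work_mem": "256MB"}},
]
for step in order_for_amortization(configs):
    print(step)
```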
1801.03291
Benjamin Sliwa
Benjamin Sliwa and Marcus Haferkamp and Manar Al-Askary and Dennis Dorn and Christian Wietfeld
A Radio-fingerprinting-based Vehicle Classification System for Intelligent Traffic Control in Smart Cities
null
2018 Annual IEEE International Systems Conference (SysCon)
10.1109/SYSCON.2018.8369511
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The measurement and provision of precise and up-to-date traffic-related key performance indicators is a key element and crucial factor for intelligent traffic control systems in upcoming smart cities. The street network is considered a highly-dynamic Cyber Physical System (CPS) where measured information forms the foundation for dynamic control methods aiming to optimize the overall system state. Apart from global system parameters like traffic flow and density, specific data such as velocity of individual vehicles as well as vehicle type information can be leveraged for highly sophisticated traffic control methods like dynamic type-specific lane assignments. Consequently, solutions for acquiring these kinds of information are required and have to comply with strict requirements ranging from accuracy and cost-efficiency to privacy preservation. In this paper, we present a system for classifying vehicles based on their radio-fingerprint. In contrast to other approaches, the proposed system is able to provide real-time capable and precise vehicle classification as well as cost-efficient installation and maintenance, privacy preservation and weather independence. The system performance in terms of accuracy and resource-efficiency is evaluated in the field using comprehensive measurements. Using a machine learning-based approach, the resulting success ratio for classifying cars and trucks is above 99%.
[ { "created": "Wed, 10 Jan 2018 10:29:23 GMT", "version": "v1" }, { "created": "Tue, 12 Jun 2018 05:22:36 GMT", "version": "v2" } ]
2018-06-13
[ [ "Sliwa", "Benjamin", "" ], [ "Haferkamp", "Marcus", "" ], [ "Al-Askary", "Manar", "" ], [ "Dorn", "Dennis", "" ], [ "Wietfeld", "Christian", "" ] ]
The measurement and provision of precise and up-to-date traffic-related key performance indicators is a key element and crucial factor for intelligent traffic control systems in upcoming smart cities. The street network is considered a highly-dynamic Cyber Physical System (CPS) where measured information forms the foundation for dynamic control methods aiming to optimize the overall system state. Apart from global system parameters like traffic flow and density, specific data such as velocity of individual vehicles as well as vehicle type information can be leveraged for highly sophisticated traffic control methods like dynamic type-specific lane assignments. Consequently, solutions for acquiring these kinds of information are required and have to comply with strict requirements ranging from accuracy and cost-efficiency to privacy preservation. In this paper, we present a system for classifying vehicles based on their radio-fingerprint. In contrast to other approaches, the proposed system is able to provide real-time capable and precise vehicle classification as well as cost-efficient installation and maintenance, privacy preservation and weather independence. The system performance in terms of accuracy and resource-efficiency is evaluated in the field using comprehensive measurements. Using a machine learning-based approach, the resulting success ratio for classifying cars and trucks is above 99%.
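The classification pipeline itself is not detailed in the abstract, so the following is only a generic, hypothetical illustration of the kind of machine-learning setup it implies: extract a few summary statistics from a received-signal trace and feed them to an off-the-shelf classifier. The synthetic traces, feature choices, and model are all assumptions, not the paper's.

```python
# Generic, assumption-only illustration of fingerprint-style vehicle classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def fake_trace(vehicle):
    """Toy received-signal trace: trucks attenuate the link more deeply than cars."""
    n = int(rng.integers(40, 80))
    depth = 8 if vehicle == "truck" else 4
    return -60 - depth * np.hanning(n) + rng.normal(0, 1, n)

def features(trace):
    return [trace.min(), trace.mean(), trace.std(), len(trace)]

labels = rng.choice(["car", "truck"], 300)
X = [features(fake_trace(v)) for v in labels]
clf = RandomForestClassifier(random_state=0).fit(X[:250], labels[:250])
print(clf.score(X[250:], labels[250:]))
```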
2403.03691
Yufan Chen
Yufan Chen, Ching Ting Leung, Yong Huang, Jianwei Sun, Hao Chen, Hanyu Gao
MolNexTR: A Generalized Deep Learning Model for Molecular Image Recognition
Submitted to the Journal of Cheminformatics
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
In the field of chemical structure recognition, the task of converting molecular images into graph structures and SMILES strings stands as a significant challenge, primarily due to the varied drawing styles and conventions prevalent in chemical literature. To bridge this gap, we propose MolNexTR, a novel image-to-graph deep learning model that fuses the strengths of ConvNext, a powerful Convolutional Neural Network variant, and Vision-TRansformer. This integration facilitates a more nuanced extraction of both local and global features from molecular images. MolNexTR can predict atoms and bonds simultaneously and understand their layout rules. It also excels at flexibly integrating symbolic chemistry principles to discern chirality and decipher abbreviated structures. We further incorporate a series of advanced algorithms, including an improved data augmentation module, an image contamination module, and a post-processing module to obtain the final SMILES output. These modules synergistically enhance the model's robustness against the diverse styles of molecular imagery found in real literature. In our test sets, MolNexTR has demonstrated superior performance, achieving an accuracy rate of 81-97%, marking a significant advancement in the domain of molecular structure recognition. Scientific contribution: MolNexTR is a novel image-to-graph model that incorporates a unique dual-stream encoder to extract complex molecular image features, and combines chemical rules to predict atoms and bonds while understanding atom and bond layout rules. In addition, it employs a series of novel augmentation algorithms to significantly enhance the robustness and performance of the model.
[ { "created": "Wed, 6 Mar 2024 13:17:41 GMT", "version": "v1" }, { "created": "Fri, 8 Mar 2024 06:32:12 GMT", "version": "v2" } ]
2024-03-11
[ [ "Chen", "Yufan", "" ], [ "Leung", "Ching Ting", "" ], [ "Huang", "Yong", "" ], [ "Sun", "Jianwei", "" ], [ "Chen", "Hao", "" ], [ "Gao", "Hanyu", "" ] ]
In the field of chemical structure recognition, the task of converting molecular images into graph structures and SMILES strings stands as a significant challenge, primarily due to the varied drawing styles and conventions prevalent in chemical literature. To bridge this gap, we propose MolNexTR, a novel image-to-graph deep learning model that fuses the strengths of ConvNext, a powerful Convolutional Neural Network variant, and Vision-TRansformer. This integration facilitates a more nuanced extraction of both local and global features from molecular images. MolNexTR can predict atoms and bonds simultaneously and understand their layout rules. It also excels at flexibly integrating symbolic chemistry principles to discern chirality and decipher abbreviated structures. We further incorporate a series of advanced algorithms, including an improved data augmentation module, an image contamination module, and a post-processing module to obtain the final SMILES output. These modules synergistically enhance the model's robustness against the diverse styles of molecular imagery found in real literature. In our test sets, MolNexTR has demonstrated superior performance, achieving an accuracy rate of 81-97%, marking a significant advancement in the domain of molecular structure recognition. Scientific contribution: MolNexTR is a novel image-to-graph model that incorporates a unique dual-stream encoder to extract complex molecular image features, and combines chemical rules to predict atoms and bonds while understanding atom and bond layout rules. In addition, it employs a series of novel augmentation algorithms to significantly enhance the robustness and performance of the model.
2201.07409
Haoran Yang
Haoran Yang, Hongxu Chen, Shirui Pan, Lin Li, Philip S. Yu, Guandong Xu
Dual Space Graph Contrastive Learning
null
null
10.1145/3485447.3512211
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised graph representation learning has emerged as a powerful tool to address real-world problems and achieves huge success in the graph learning domain. Graph contrastive learning is one of the unsupervised graph representation learning methods, which has recently attracted attention from researchers and achieved state-of-the-art performances on various tasks. The key to the success of graph contrastive learning is to construct proper contrasting pairs to acquire the underlying structural semantics of the graph. However, this key part is not fully explored currently; most ways of generating contrasting pairs focus on augmenting or perturbing graph structures to obtain different views of the input graph. But such strategies can degrade performance by adding noise into the graph, which may narrow down the field of applications of graph contrastive learning. In this paper, we propose a novel graph contrastive learning method, namely \textbf{D}ual \textbf{S}pace \textbf{G}raph \textbf{C}ontrastive (DSGC) Learning, to conduct graph contrastive learning among views generated in different spaces, including the hyperbolic space and the Euclidean space. Since both spaces have their own advantages for representing graph data in the embedding spaces, we hope to utilize graph contrastive learning to bridge the spaces and leverage advantages from both sides. The comparison experiment results show that DSGC achieves competitive or better performances among all the datasets. In addition, we conduct extensive experiments to analyze the impact of different graph encoders on DSGC, giving insights about how to better leverage the advantages of contrastive learning between different spaces.
[ { "created": "Wed, 19 Jan 2022 04:10:29 GMT", "version": "v1" }, { "created": "Fri, 4 Mar 2022 20:09:47 GMT", "version": "v2" } ]
2022-03-08
[ [ "Yang", "Haoran", "" ], [ "Chen", "Hongxu", "" ], [ "Pan", "Shirui", "" ], [ "Li", "Lin", "" ], [ "Yu", "Philip S.", "" ], [ "Xu", "Guandong", "" ] ]
Unsupervised graph representation learning has emerged as a powerful tool to address real-world problems and achieves huge success in the graph learning domain. Graph contrastive learning is one of the unsupervised graph representation learning methods, which has recently attracted attention from researchers and achieved state-of-the-art performances on various tasks. The key to the success of graph contrastive learning is to construct proper contrasting pairs to acquire the underlying structural semantics of the graph. However, this key part is not fully explored currently; most ways of generating contrasting pairs focus on augmenting or perturbing graph structures to obtain different views of the input graph. But such strategies can degrade performance by adding noise into the graph, which may narrow down the field of applications of graph contrastive learning. In this paper, we propose a novel graph contrastive learning method, namely \textbf{D}ual \textbf{S}pace \textbf{G}raph \textbf{C}ontrastive (DSGC) Learning, to conduct graph contrastive learning among views generated in different spaces, including the hyperbolic space and the Euclidean space. Since both spaces have their own advantages for representing graph data in the embedding spaces, we hope to utilize graph contrastive learning to bridge the spaces and leverage advantages from both sides. The comparison experiment results show that DSGC achieves competitive or better performances among all the datasets. In addition, we conduct extensive experiments to analyze the impact of different graph encoders on DSGC, giving insights about how to better leverage the advantages of contrastive learning between different spaces.
2403.02093
Dominik Scheinert
Benjamin J. J. Pfister and Dominik Scheinert and Morgan K. Geldenhuys and Odej Kao
Daedalus: Self-Adaptive Horizontal Autoscaling for Resource Efficiency of Distributed Stream Processing Systems
12 pages, 11 figures, 1 table
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed Stream Processing (DSP) systems are capable of processing large streams of unbounded data, offering high throughput and low latencies. To maintain a stable Quality of Service (QoS), these systems require a sufficient allocation of resources. At the same time, over-provisioning can result in wasted energy and high operating costs. Therefore, to maximize resource utilization, autoscaling methods have been proposed that aim to efficiently match the resource allocation with the incoming workload. However, determining when and by how much to scale remains a significant challenge. Given the long-running nature of DSP jobs, scaling actions need to be executed at runtime, and to maintain a good QoS, they should be both accurate and infrequent. To address the challenges of autoscaling, the concept of self-adaptive systems is particularly fitting. These systems monitor themselves and their environment, adapting to changes with minimal need for expert involvement. This paper introduces Daedalus, a self-adaptive manager for autoscaling in DSP systems, which draws on the principles of self-adaptation to address the challenge of efficient autoscaling. Daedalus monitors a running DSP job and builds performance models, aiming to predict the maximum processing capacity at different scale-outs. When combined with time series forecasting to predict future workloads, Daedalus proactively scales DSP jobs, optimizing for maximum throughput and minimizing both latencies and resource usage. We conducted experiments using Apache Flink and Kafka Streams to evaluate the performance of Daedalus against two state-of-the-art approaches. Daedalus was able to achieve comparable latencies while reducing resource usage by up to 71%.
[ { "created": "Mon, 4 Mar 2024 14:53:50 GMT", "version": "v1" }, { "created": "Tue, 5 Mar 2024 10:31:37 GMT", "version": "v2" } ]
2024-03-06
[ [ "Pfister", "Benjamin J. J.", "" ], [ "Scheinert", "Dominik", "" ], [ "Geldenhuys", "Morgan K.", "" ], [ "Kao", "Odej", "" ] ]
Distributed Stream Processing (DSP) systems are capable of processing large streams of unbounded data, offering high throughput and low latencies. To maintain a stable Quality of Service (QoS), these systems require a sufficient allocation of resources. At the same time, over-provisioning can result in wasted energy and high operating costs. Therefore, to maximize resource utilization, autoscaling methods have been proposed that aim to efficiently match the resource allocation with the incoming workload. However, determining when and by how much to scale remains a significant challenge. Given the long-running nature of DSP jobs, scaling actions need to be executed at runtime, and to maintain a good QoS, they should be both accurate and infrequent. To address the challenges of autoscaling, the concept of self-adaptive systems is particularly fitting. These systems monitor themselves and their environment, adapting to changes with minimal need for expert involvement. This paper introduces Daedalus, a self-adaptive manager for autoscaling in DSP systems, which draws on the principles of self-adaptation to address the challenge of efficient autoscaling. Daedalus monitors a running DSP job and builds performance models, aiming to predict the maximum processing capacity at different scale-outs. When combined with time series forecasting to predict future workloads, Daedalus proactively scales DSP jobs, optimizing for maximum throughput and minimizing both latencies and resource usage. We conducted experiments using Apache Flink and Kafka Streams to evaluate the performance of Daedalus against two state-of-the-art approaches. Daedalus was able to achieve comparable latencies while reducing resource usage by up to 71%.
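As a minimal illustration of the capacity-model-plus-forecast idea described above (not Daedalus itself; the numbers and the linear model are assumptions), one can fit a simple model of sustained throughput versus scale-out and pick the smallest scale-out whose predicted capacity covers a forecast arrival rate with some headroom.

```python
# Assumption-only sketch: choose a scale-out from a fitted capacity model and a forecast.
import numpy as np

observed = {2: 18_000, 4: 34_000, 8: 61_000}   # scale-out -> sustained events/s (toy values)

def predicted_capacity(scale_out):
    xs = np.array(sorted(observed), dtype=float)
    ys = np.array([observed[x] for x in sorted(observed)], dtype=float)
    a, b = np.polyfit(xs, ys, deg=1)            # real systems would use richer models
    return a * scale_out + b

def choose_scale_out(forecast_rate, candidates=range(1, 17), headroom=1.2):
    for s in candidates:
        if predicted_capacity(s) >= forecast_rate * headroom:
            return s
    return max(candidates)

print(choose_scale_out(forecast_rate=40_000))   # -> 7 with the toy numbers above
```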
1911.10501
Qifu Sun
Rina Su, Qifu Tyler Sun, Zhongshan Zhang
Delay-Complexity Trade-off of Random Linear Network Coding in Wireless Broadcast
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In wireless broadcast, random linear network coding (RLNC) over GF(2^L) is known to asymptotically achieve the optimal completion delay with increasing L. However, the high decoding complexity hinders the potential applicability of RLNC schemes over large GF(2^L). In this paper, a comprehensive analysis of completion delay and decoding complexity is conducted for field-based systematic RLNC schemes in wireless broadcast. In particular, we prove that the RLNC scheme over GF(2) can also asymptotically approach the optimal completion delay per packet when the packet number goes to infinity. Moreover, we introduce a new method, based on circular-shift operations, to design RLNC schemes which avoid multiplications over large GF(2^L). Based on both theoretical and numerical analyses, the new RLNC schemes turn out to have a much better trade-off between completion delay and decoding complexity. In particular, numerical results demonstrate that the proposed schemes can attain average completion delay just within 5% higher than the optimal one, while the decoding complexity is only about 3 times the one of the RLNC scheme over GF(2).
[ { "created": "Sun, 24 Nov 2019 11:07:20 GMT", "version": "v1" }, { "created": "Sat, 16 May 2020 01:20:07 GMT", "version": "v2" } ]
2020-05-19
[ [ "Su", "Rina", "" ], [ "Sun", "Qifu Tyler", "" ], [ "Zhang", "Zhongshan", "" ] ]
In wireless broadcast, random linear network coding (RLNC) over GF(2^L) is known to asymptotically achieve the optimal completion delay with increasing L. However, the high decoding complexity hinders the potential applicability of RLNC schemes over large GF(2^L). In this paper, a comprehensive analysis of completion delay and decoding complexity is conducted for field-based systematic RLNC schemes in wireless broadcast. In particular, we prove that the RLNC scheme over GF(2) can also asymptotically approach the optimal completion delay per packet when the packet number goes to infinity. Moreover, we introduce a new method, based on circular-shift operations, to design RLNC schemes which avoid multiplications over large GF(2^L). Based on both theoretical and numerical analyses, the new RLNC schemes turn out to have a much better trade-off between completion delay and decoding complexity. In particular, numerical results demonstrate that the proposed schemes can attain average completion delay just within 5% higher than the optimal one, while the decoding complexity is only about 3 times the one of the RLNC scheme over GF(2).
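For readers unfamiliar with RLNC over GF(2), the encoding step is just a random XOR combination of the source packets; the random bits are the coding coefficients. The toy sketch below shows only this plain GF(2) encoding, not the paper's circular-shift construction or its delay analysis.

```python
# Toy GF(2) RLNC encoder: a coded packet is the XOR of a random subset of source packets.
import random

def rlnc_encode_gf2(packets, rng):
    coeffs = [rng.randint(0, 1) for _ in packets]   # random GF(2) coding coefficients
    coded = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            coded = bytes(a ^ b for a, b in zip(coded, p))
    return coeffs, coded

packets = [b"abcd", b"efgh", b"ijkl"]               # equal-length source packets
coeffs, coded = rlnc_encode_gf2(packets, random.Random(0))
print(coeffs, coded)   # a receiver can decode once it collects 3 linearly independent combinations
```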
1406.5266
Yaniv Taigman
Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf
Web-Scale Training for Face Identification
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/3.0/
Scaling machine learning methods to very large datasets has attracted considerable attention in recent years, thanks to easy access to ubiquitous sensing and data from the web. We study face recognition and show that three distinct properties have surprising effects on the transferability of deep convolutional networks (CNN): (1) The bottleneck of the network serves as an important transfer learning regularizer, and (2) in contrast to the common wisdom, performance saturation may exist in CNNs (as the number of training samples grows); we propose a solution for alleviating this by replacing the naive random subsampling of the training set with a bootstrapping process. Moreover, (3) we find a link between the representation norm and the ability to discriminate in a target domain, which sheds light on how such networks represent faces. Based on these discoveries, we are able to improve face recognition accuracy on the widely used LFW benchmark, both in the verification (1:1) and identification (1:N) protocols, and directly compare, for the first time, with the state of the art Commercially-Off-The-Shelf system and show a sizable leap in performance.
[ { "created": "Fri, 20 Jun 2014 02:51:31 GMT", "version": "v1" }, { "created": "Sat, 18 Apr 2015 09:18:19 GMT", "version": "v2" } ]
2015-04-21
[ [ "Taigman", "Yaniv", "" ], [ "Yang", "Ming", "" ], [ "Ranzato", "Marc'Aurelio", "" ], [ "Wolf", "Lior", "" ] ]
Scaling machine learning methods to very large datasets has attracted considerable attention in recent years, thanks to easy access to ubiquitous sensing and data from the web. We study face recognition and show that three distinct properties have surprising effects on the transferability of deep convolutional networks (CNN): (1) The bottleneck of the network serves as an important transfer learning regularizer, and (2) in contrast to the common wisdom, performance saturation may exist in CNNs (as the number of training samples grows); we propose a solution for alleviating this by replacing the naive random subsampling of the training set with a bootstrapping process. Moreover, (3) we find a link between the representation norm and the ability to discriminate in a target domain, which sheds light on how such networks represent faces. Based on these discoveries, we are able to improve face recognition accuracy on the widely used LFW benchmark, both in the verification (1:1) and identification (1:N) protocols, and directly compare, for the first time, with the state of the art Commercially-Off-The-Shelf system and show a sizable leap in performance.
2301.02889
Zirou Qiu
Zirou Qiu, Chen Chen, Madhav V. Marathe, S. S. Ravi, Daniel J. Rosenkrantz, Richard E. Stearns, Anil Vullikanti
Networked Anti-Coordination Games Meet Graphical Dynamical Systems: Equilibria and Convergence
Accepted at AAAI-23
null
null
null
cs.GT
http://creativecommons.org/licenses/by/4.0/
Evolutionary anti-coordination games on networks capture real-world strategic situations such as traffic routing and market competition. In such games, agents maximize their utility by choosing actions that differ from their neighbors' actions. Two important problems concerning evolutionary games are the existence of a pure Nash equilibrium (NE) and the convergence time of the dynamics. In this work, we study these two problems for anti-coordination games under sequential and synchronous update schemes. For each update scheme, we examine two decision modes based on whether an agent considers its own previous action (self essential ) or not (self non-essential ) in choosing its next action. Using a relationship between games and dynamical systems, we show that for both update schemes, finding an NE can be done efficiently under the self non-essential mode but is computationally intractable under the self essential mode. To cope with this hardness, we identify special cases for which an NE can be obtained efficiently. For convergence time, we show that the best-response dynamics converges in a polynomial number of steps in the synchronous scheme for both modes; for the sequential scheme, the convergence time is polynomial only under the self non-essential mode. Through experiments, we empirically examine the convergence time and the equilibria for both synthetic and real-world networks.
[ { "created": "Sat, 7 Jan 2023 16:32:22 GMT", "version": "v1" }, { "created": "Fri, 3 Mar 2023 18:25:09 GMT", "version": "v2" }, { "created": "Fri, 8 Dec 2023 19:32:30 GMT", "version": "v3" }, { "created": "Sun, 18 Feb 2024 19:22:12 GMT", "version": "v4" }, { "created": "Fri, 29 Mar 2024 19:19:57 GMT", "version": "v5" } ]
2024-04-02
[ [ "Qiu", "Zirou", "" ], [ "Chen", "Chen", "" ], [ "Marathe", "Madhav V.", "" ], [ "Ravi", "S. S.", "" ], [ "Rosenkrantz", "Daniel J.", "" ], [ "Stearns", "Richard E.", "" ], [ "Vullikanti", "Anil", "" ] ]
Evolutionary anti-coordination games on networks capture real-world strategic situations such as traffic routing and market competition. In such games, agents maximize their utility by choosing actions that differ from their neighbors' actions. Two important problems concerning evolutionary games are the existence of a pure Nash equilibrium (NE) and the convergence time of the dynamics. In this work, we study these two problems for anti-coordination games under sequential and synchronous update schemes. For each update scheme, we examine two decision modes based on whether an agent considers its own previous action (self essential ) or not (self non-essential ) in choosing its next action. Using a relationship between games and dynamical systems, we show that for both update schemes, finding an NE can be done efficiently under the self non-essential mode but is computationally intractable under the self essential mode. To cope with this hardness, we identify special cases for which an NE can be obtained efficiently. For convergence time, we show that the best-response dynamics converges in a polynomial number of steps in the synchronous scheme for both modes; for the sequential scheme, the convergence time is polynomial only under the self non-essential mode. Through experiments, we empirically examine the convergence time and the equilibria for both synthetic and real-world networks.
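To make the game dynamics concrete, here is a small, assumption-only simulation of sequential best-response dynamics for a binary anti-coordination game in the self non-essential mode: each agent adopts the action held by the minority of its neighbors. Every improving move increases the number of discordant edges (a cut-style potential), so the loop terminates at a pure Nash equilibrium; this sketch does not reproduce the paper's complexity results or its synchronous-scheme analysis.

```python
# Sequential best-response dynamics for binary anti-coordination on a graph (illustrative).
def sequential_best_response(adj, actions):
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            ones = sum(actions[u] for u in nbrs)
            zeros = len(nbrs) - ones
            best = 0 if ones > zeros else 1 if zeros > ones else actions[v]  # ties keep action
            if best != actions[v]:
                actions[v] = best
                changed = True
    return actions

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}               # a path on four nodes
print(sequential_best_response(adj, {v: 0 for v in adj}))  # e.g. {0: 1, 1: 0, 2: 1, 3: 0}
```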
2203.12590
Jaeun Phyo
Jaeun Phyo, Wonjun Ko, Eunjin Jeon, and Heung-Il Suk
TransSleep: Transitioning-aware Attention-based Deep Neural Network for Sleep Staging
13 pages, 9 figures
null
null
null
cs.LG cs.AI cs.HC eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sleep staging is essential for sleep assessment and plays a vital role as a health indicator. Many recent studies have devised various machine learning as well as deep learning architectures for sleep staging. However, two key challenges hinder the practical use of these architectures: effectively capturing salient waveforms in sleep signals and correctly classifying confusing stages in transitioning epochs. In this study, we propose a novel deep neural network structure, TransSleep, that captures distinctive local temporal patterns and distinguishes confusing stages using two auxiliary tasks. In particular, TransSleep adopts an attention-based multi-scale feature extractor module to capture salient waveforms; a stage-confusion estimator module with a novel auxiliary task, epoch-level stage classification, to estimate confidence scores for identifying confusing stages; and a context encoder module with the other novel auxiliary task, stage-transition detection, to represent contextual relationships across neighboring epochs. Results show that TransSleep achieves promising performance in automatic sleep staging. The validity of TransSleep is demonstrated by its state-of-the-art performance on two publicly available datasets, Sleep-EDF and MASS. Furthermore, we performed ablations to analyze our results from different perspectives. Based on our overall results, we believe that TransSleep has immense potential to provide new insights into deep learning-based sleep staging.
[ { "created": "Tue, 22 Mar 2022 08:55:32 GMT", "version": "v1" } ]
2022-03-24
[ [ "Phyo", "Jauen", "" ], [ "Ko", "Wonjun", "" ], [ "Jeon", "Eunjin", "" ], [ "Suk", "Heung-Il", "" ] ]
Sleep staging is essential for sleep assessment and plays a vital role as a health indicator. Many recent studies have devised various machine learning as well as deep learning architectures for sleep staging. However, two key challenges hinder the practical use of these architectures: effectively capturing salient waveforms in sleep signals and correctly classifying confusing stages in transitioning epochs. In this study, we propose a novel deep neural network structure, TransSleep, that captures distinctive local temporal patterns and distinguishes confusing stages using two auxiliary tasks. In particular, TransSleep adopts an attention-based multi-scale feature extractor module to capture salient waveforms; a stage-confusion estimator module with a novel auxiliary task, epoch-level stage classification, to estimate confidence scores for identifying confusing stages; and a context encoder module with the other novel auxiliary task, stage-transition detection, to represent contextual relationships across neighboring epochs. Results show that TransSleep achieves promising performance in automatic sleep staging. The validity of TransSleep is demonstrated by its state-of-the-art performance on two publicly available datasets, Sleep-EDF and MASS. Furthermore, we performed ablations to analyze our results from different perspectives. Based on our overall results, we believe that TransSleep has immense potential to provide new insights into deep learning-based sleep staging.
2303.16753
Peiyu Liu
Peiyu Liu, Ze-Feng Gao, Yushuo Chen, Wayne Xin Zhao, Ji-Rong Wen
Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture
14 pages, 4 figures, 6 tables
null
null
null
cs.CL
http://creativecommons.org/publicdomain/zero/1.0/
In this paper, we propose a highly parameter-efficient approach to scaling pre-trained language models (PLMs) to a deeper model depth. Unlike prior work that shares all parameters or uses extra blocks, we design a more capable parameter-sharing architecture based on matrix product operator (MPO). MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts: the major part that contains the major information (central tensor) and the supplementary part that only has a small proportion of parameters (auxiliary tensors). Based on such a decomposition, our architecture shares the central tensor across all layers for reducing the model size and meanwhile keeps layer-specific auxiliary tensors (also using adapters) for enhancing the adaptation flexibility. To improve the model training, we further propose a stable initialization algorithm tailored for the MPO-based architecture. Extensive experiments have demonstrated the effectiveness of our proposed model in reducing the model size and achieving highly competitive performance.
[ { "created": "Mon, 27 Mar 2023 02:34:09 GMT", "version": "v1" }, { "created": "Tue, 11 Apr 2023 02:45:10 GMT", "version": "v2" } ]
2023-04-12
[ [ "Liu", "Peiyu", "" ], [ "Gao", "Ze-Feng", "" ], [ "Chen", "Yushuo", "" ], [ "Zhao", "Wayne Xin", "" ], [ "Wen", "Ji-Rong", "" ] ]
In this paper, we propose a highly parameter-efficient approach to scaling pre-trained language models (PLMs) to a deeper model depth. Unlike prior work that shares all parameters or uses extra blocks, we design a more capable parameter-sharing architecture based on matrix product operator (MPO). MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts: the major part that contains the major information (central tensor) and the supplementary part that only has a small proportion of parameters (auxiliary tensors). Based on such a decomposition, our architecture shares the central tensor across all layers for reducing the model size and meanwhile keeps layer-specific auxiliary tensors (also using adapters) for enhancing the adaptation flexibility. To improve the model training, we further propose a stable initialization algorithm tailored for the MPO-based architecture. Extensive experiments have demonstrated the effectiveness of our proposed model in reducing the model size and achieving highly competitive performance.
1901.06441
Kenneth S. Palacio-Baus
Kenneth Palacio-Baus and Natasha Devroye
Achievable Error Exponents of One-Way and Two-Way AWGN Channels
46 pages, 18 figures, Submitted to IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Achievable error exponents for the one-way AWGN channel with noisy feedback and the two-way AWGN channel are derived for the transmission of a finite number of messages $M$ using fixed block length $n$, under the almost sure (AS) and the expected block (EXP) power constraints. In the one-way setting under noisy AWGN feedback, it is shown that under the AS constraint and when the feedback link is much stronger than the direct link, active feedback leads to a larger gain over the non-feedback error exponent than passive feedback. Under the EXP constraint, a previously known error exponent for the transmission of two messages is generalized to any arbitrary but finite number of messages $M$. In the two-way setting, where each user has its own message to send in addition to (possibly) aiding in the transmission of feedback for the opposite direction, error exponent regions are defined and derived for the first time for the AWGN two-way channel under both AS and EXP power constraints. It is shown that feedback or interaction may lead to error exponent gains in one direction, possibly at the expense of a decrease in the error exponents attained in the other direction. The relationship between $M$ and $n$ supported by our achievability strategies is explored.
[ { "created": "Fri, 18 Jan 2019 23:27:43 GMT", "version": "v1" } ]
2019-01-23
[ [ "Palacio-Baus", "Kenneth", "" ], [ "Devroye", "Natasha", "" ] ]
Achievable error exponents for the one-way AWGN channel with noisy feedback and the two-way AWGN channel are derived for the transmission of a finite number of messages $M$ using fixed block length $n$, under the almost sure (AS) and the expected block (EXP) power constraints. In the one-way setting under noisy AWGN feedback, it is shown that under the AS constraint and when the feedback link is much stronger than the direct link, active feedback leads to a larger gain over the non-feedback error exponent than passive feedback. Under the EXP constraint, a previously known error exponent for the transmission of two messages is generalized to any arbitrary but finite number of messages $M$. In the two-way setting, where each user has its own message to send in addition to (possibly) aiding in the transmission of feedback for the opposite direction, error exponent regions are defined and derived for the first time for the AWGN two-way channel under both AS and EXP power constraints. It is shown that feedback or interaction may lead to error exponent gains in one direction, possibly at the expense of a decrease in the error exponents attained in the other direction. The relationship between $M$ and $n$ supported by our achievability strategies is explored.
1812.02953
Han Yu
Han Yu, Zhiqi Shen, Chunyan Miao, Cyril Leung, Victor R. Lesser and Qiang Yang
Building Ethics into Artificial Intelligence
null
H. Yu, Z. Shen, C. Miao, C. Leung, V. R. Lesser & Q. Yang, "Building Ethics into Artificial Intelligence," in Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI'18), pp. 5527-5533, 2018
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As artificial intelligence (AI) systems become increasingly ubiquitous, the topic of AI governance for ethical decision-making by AI has captured public imagination. Within the AI research community, this topic remains less familiar to many researchers. In this paper, we complement existing surveys, which largely focused on the psychological, social and legal discussions of the topic, with an analysis of recent advances in technical solutions for AI governance. By reviewing publications in leading AI conferences including AAAI, AAMAS, ECAI and IJCAI, we propose a taxonomy which divides the field into four areas: 1) exploring ethical dilemmas; 2) individual ethical decision frameworks; 3) collective ethical decision frameworks; and 4) ethics in human-AI interactions. We highlight the intuitions and key techniques used in each approach, and discuss promising future research directions towards successful integration of ethical AI systems into human societies.
[ { "created": "Fri, 7 Dec 2018 09:18:01 GMT", "version": "v1" } ]
2018-12-10
[ [ "Yu", "Han", "" ], [ "Shen", "Zhiqi", "" ], [ "Miao", "Chunyan", "" ], [ "Leung", "Cyril", "" ], [ "Lesser", "Victor R.", "" ], [ "Yang", "Qiang", "" ] ]
As artificial intelligence (AI) systems become increasingly ubiquitous, the topic of AI governance for ethical decision-making by AI has captured public imagination. Within the AI research community, this topic remains less familiar to many researchers. In this paper, we complement existing surveys, which largely focused on the psychological, social and legal discussions of the topic, with an analysis of recent advances in technical solutions for AI governance. By reviewing publications in leading AI conferences including AAAI, AAMAS, ECAI and IJCAI, we propose a taxonomy which divides the field into four areas: 1) exploring ethical dilemmas; 2) individual ethical decision frameworks; 3) collective ethical decision frameworks; and 4) ethics in human-AI interactions. We highlight the intuitions and key techniques used in each approach, and discuss promising future research directions towards successful integration of ethical AI systems into human societies.
2406.19433
Armin Namavari
Armin Namavari, Barry Wang, Sanketh Menda, Ben Nassi, Nirvan Tyagi, James Grimmelmann, Amy Zhang, Thomas Ristenpart
Private Hierarchical Governance for Encrypted Messaging
Published in IEEE Security and Privacy 2024
null
10.1109/SP54263.2024.00235
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing harms caused by hate, harassment, and other forms of abuse online have motivated major platforms to explore hierarchical governance. The idea is to allow communities to have designated members take on moderation and leadership duties; meanwhile, members can still escalate issues to the platform. But these promising approaches have only been explored in plaintext settings where community content is public to the platform. It is unclear how one can realize hierarchical governance in the huge and increasing number of online communities that utilize end-to-end encrypted (E2EE) messaging for privacy. We propose private hierarchical governance systems. These should enable similar levels of community governance as in plaintext settings, while maintaining cryptographic privacy of content and governance actions not reported to the platform. We design the first such system, taking a layered approach that adds governance logic on top of an encrypted messaging protocol; we show how an extension to the message layer security (MLS) protocol suffices for achieving a rich set of governance policies. Our approach allows developers to rapidly prototype new governance features, taking inspiration from a plaintext system called PolicyKit. We build a prototype E2EE messaging system called MlsGov that supports content-based community and platform moderation, elections of community moderators, votes to remove abusive users, and more.
[ { "created": "Thu, 27 Jun 2024 17:33:23 GMT", "version": "v1" }, { "created": "Tue, 2 Jul 2024 20:25:10 GMT", "version": "v2" } ]
2024-07-04
[ [ "Namavari", "Armin", "" ], [ "Wang", "Barry", "" ], [ "Menda", "Sanketh", "" ], [ "Nassi", "Ben", "" ], [ "Tyagi", "Nirvan", "" ], [ "Grimmelmann", "James", "" ], [ "Zhang", "Amy", "" ], [ "Ristenpart", "Thomas", "" ] ]
The increasing harms caused by hate, harassment, and other forms of abuse online have motivated major platforms to explore hierarchical governance. The idea is to allow communities to have designated members take on moderation and leadership duties; meanwhile, members can still escalate issues to the platform. But these promising approaches have only been explored in plaintext settings where community content is public to the platform. It is unclear how one can realize hierarchical governance in the huge and increasing number of online communities that utilize end-to-end encrypted (E2EE) messaging for privacy. We propose private hierarchical governance systems. These should enable similar levels of community governance as in plaintext settings, while maintaining cryptographic privacy of content and governance actions not reported to the platform. We design the first such system, taking a layered approach that adds governance logic on top of an encrypted messaging protocol; we show how an extension to the message layer security (MLS) protocol suffices for achieving a rich set of governance policies. Our approach allows developers to rapidly prototype new governance features, taking inspiration from a plaintext system called PolicyKit. We build a prototype E2EE messaging system called MlsGov that supports content-based community and platform moderation, elections of community moderators, votes to remove abusive users, and more.
2107.02139
Lin Chen
Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, Qian Yu
Feature Cross Search via Submodular Optimization
Accepted to ESA 2021. Authors are ordered alphabetically
null
null
null
cs.LG cs.AI cs.CC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study feature cross search as a fundamental primitive in feature engineering. The importance of feature cross search especially for the linear model has been known for a while, with well-known textbook examples. In this problem, the goal is to select a small subset of features, combine them to form a new feature (called the crossed feature) by considering their Cartesian product, and find feature crosses to learn an \emph{accurate} model. In particular, we study the problem of maximizing a normalized Area Under the Curve (AUC) of the linear model trained on the crossed feature column. First, we show that it is not possible to provide an $n^{1/\log\log n}$-approximation algorithm for this problem unless the exponential time hypothesis fails. This result also rules out the possibility of solving this problem in polynomial time unless $\mathsf{P}=\mathsf{NP}$. On the positive side, by assuming the \naive\ assumption, we show that there exists a simple greedy $(1-1/e)$-approximation algorithm for this problem. This result is established by relating the AUC to the total variation of the commutator of two probability measures and showing that the total variation of the commutator is monotone and submodular. To show this, we relate the submodularity of this function to the positive semi-definiteness of a corresponding kernel matrix. Then, we use Bochner's theorem to prove the positive semi-definiteness by showing that its inverse Fourier transform is non-negative everywhere. Our techniques and structural results might be of independent interest.
[ { "created": "Mon, 5 Jul 2021 16:58:31 GMT", "version": "v1" } ]
2021-07-06
[ [ "Chen", "Lin", "" ], [ "Esfandiari", "Hossein", "" ], [ "Fu", "Gang", "" ], [ "Mirrokni", "Vahab S.", "" ], [ "Yu", "Qian", "" ] ]
In this paper, we study feature cross search as a fundamental primitive in feature engineering. The importance of feature cross search especially for the linear model has been known for a while, with well-known textbook examples. In this problem, the goal is to select a small subset of features, combine them to form a new feature (called the crossed feature) by considering their Cartesian product, and find feature crosses to learn an \emph{accurate} model. In particular, we study the problem of maximizing a normalized Area Under the Curve (AUC) of the linear model trained on the crossed feature column. First, we show that it is not possible to provide an $n^{1/\log\log n}$-approximation algorithm for this problem unless the exponential time hypothesis fails. This result also rules out the possibility of solving this problem in polynomial time unless $\mathsf{P}=\mathsf{NP}$. On the positive side, by assuming the \naive\ assumption, we show that there exists a simple greedy $(1-1/e)$-approximation algorithm for this problem. This result is established by relating the AUC to the total variation of the commutator of two probability measures and showing that the total variation of the commutator is monotone and submodular. To show this, we relate the submodularity of this function to the positive semi-definiteness of a corresponding kernel matrix. Then, we use Bochner's theorem to prove the positive semi-definiteness by showing that its inverse Fourier transform is non-negative everywhere. Our techniques and structural results might be of independent interest.
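As a toy, assumption-only illustration of greedy feature cross search (not the paper's algorithm or its AUC normalization), the snippet below scores a candidate cross by the in-sample AUC of a predictor that assigns each crossed category its empirical positive rate, and greedily adds the feature that helps most.

```python
# Greedy feature-cross selection with an in-sample AUC proxy (illustrative only).
import pandas as pd
from sklearn.metrics import roc_auc_score

def cross_auc(df, features, label):
    if not features:
        return 0.5
    key = df[list(features)].astype(str).agg("|".join, axis=1)   # Cartesian-product category
    scores = key.map(df.groupby(key)[label].mean())              # score = empirical positive rate
    return roc_auc_score(df[label], scores)

def greedy_cross(df, candidates, label, k=2):
    chosen = []
    for _ in range(k):
        best = max(candidates,
                   key=lambda f: -1 if f in chosen else cross_auc(df, chosen + [f], label))
        chosen.append(best)
    return chosen, cross_auc(df, chosen, label)

df = pd.DataFrame({"device": ["a", "a", "b", "b", "a", "b"],
                   "country": ["US", "DE", "US", "DE", "DE", "US"],
                   "click": [1, 0, 0, 1, 0, 1]})
print(greedy_cross(df, ["device", "country"], "click", k=2))
```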
2304.03502
Jaeho Jeong
Jaeho Jeong, Hosung Park, Hee-Youl Kwak, Jong-Seon No, Hahyeon Jeon, Jeong Wook Lee, Jae-Won Kim
Iterative Soft Decoding Algorithm for DNA Storage Using Quality Score and Redecoding
null
null
10.1109/TNB.2023.3284406
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Ever since deoxyribonucleic acid (DNA) was considered a next-generation data-storage medium, many research efforts have been made to correct errors that occur during the synthesis, storage, and sequencing processes using error correcting codes (ECCs). Previous works on recovering the data from the sequenced DNA pool with errors have utilized hard decoding algorithms based on a majority decision rule. To improve the correction capability of ECCs and the robustness of the DNA storage system, we propose a new iterative soft decoding algorithm, where soft information is obtained from FASTQ files and channel statistics. In particular, we propose a new formula for log-likelihood ratio (LLR) calculation using quality scores (Q-scores) and a redecoding method which may be suitable for error correction and detection in the DNA sequencing area. Based on the widely adopted encoding scheme of the fountain code structure proposed by Erlich et al., we use three different sets of sequenced data to show the consistency of the performance evaluation. The proposed soft decoding algorithm gives a 2.3% ~ 7.0% improvement in read-count reduction compared to the state-of-the-art decoding method, and it is shown that it can deal with erroneous sequenced oligo reads with insertion and deletion errors.
[ { "created": "Fri, 7 Apr 2023 06:47:00 GMT", "version": "v1" } ]
2023-06-14
[ [ "Jeong", "Jaeho", "" ], [ "Park", "Hosung", "" ], [ "Kwak", "Hee-Youl", "" ], [ "No", "Jong-Seon", "" ], [ "Jeon", "Hahyeon", "" ], [ "Lee", "Jeong Wook", "" ], [ "Kim", "Jae-Won", "" ] ]
Ever since deoxyribonucleic acid (DNA) was considered a next-generation data-storage medium, many research efforts have been made to correct errors that occur during the synthesis, storage, and sequencing processes using error correcting codes (ECCs). Previous works on recovering the data from the sequenced DNA pool with errors have utilized hard decoding algorithms based on a majority decision rule. To improve the correction capability of ECCs and the robustness of the DNA storage system, we propose a new iterative soft decoding algorithm, where soft information is obtained from FASTQ files and channel statistics. In particular, we propose a new formula for log-likelihood ratio (LLR) calculation using quality scores (Q-scores) and a redecoding method which may be suitable for error correction and detection in the DNA sequencing area. Based on the widely adopted encoding scheme of the fountain code structure proposed by Erlich et al., we use three different sets of sequenced data to show the consistency of the performance evaluation. The proposed soft decoding algorithm gives a 2.3% ~ 7.0% improvement in read-count reduction compared to the state-of-the-art decoding method, and it is shown that it can deal with erroneous sequenced oligo reads with insertion and deletion errors.
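The exact LLR formula proposed in the paper is not reproduced here; as background, the snippet below only shows the standard Phred relation between a quality score and the base-call error probability, plus one simple LLR that could be built from it under a uniform-error assumption over the other three bases. Treat it as an assumption-labeled illustration, not the paper's method.

```python
# Phred Q-score -> error probability, and a simple LLR under a uniform-error assumption.
import math

def qscore_to_error_prob(q):
    return 10 ** (-q / 10)               # standard Phred definition: Q = -10 * log10(p)

def llr_from_qscore(q):
    p = qscore_to_error_prob(q)
    return math.log((1 - p) / (p / 3))   # "called base correct" vs. "one of the 3 other bases"

for q in (10, 20, 30, 40):
    print(q, round(qscore_to_error_prob(q), 5), round(llr_from_qscore(q), 2))
```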
2211.12581
Chris Cameron
Chris Cameron, Jason Hartford, Taylor Lundy, Tuan Truong, Alan Milligan, Rex Chen, Kevin Leyton-Brown
UNSAT Solver Synthesis via Monte Carlo Forest Search
null
null
10.1007/978-3-031-60597-0_12
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce Monte Carlo Forest Search (MCFS), a class of reinforcement learning (RL) algorithms for learning policies in {tree MDPs}, for which policy execution involves traversing an exponential-sized tree. Examples of such problems include proving unsatisfiability of a SAT formula; counting the number of solutions of a satisfiable SAT formula; and finding the optimal solution to a mixed-integer program. MCFS algorithms can be seen as extensions of Monte Carlo Tree Search (MCTS) to cases where, rather than finding a good path (solution) within a tree, the problem is to find a small tree within a forest of candidate trees. We instantiate and evaluate our ideas in an algorithm that we dub Knuth Synthesis, an MCFS algorithm that learns DPLL branching policies for solving the Boolean satisfiability (SAT) problem, with the objective of achieving good average-case performance on a given distribution of unsatisfiable problem instances. Knuth Synthesis is the first RL approach to avoid the prohibitive costs of policy evaluations in an exponentially-sized tree, leveraging two key ideas: first, we estimate tree size by randomly sampling paths and measuring their lengths, drawing on an unbiased approximation due to Knuth (1975); second, we query a strong solver at a user-defined depth rather than learning a policy across the whole tree, to focus our policy search on early decisions that offer the greatest potential for reducing tree size. We matched or exceeded the performance of a strong baseline on three well-known SAT distributions, facing problems that were two orders of magnitude more challenging than those addressed in previous RL studies.
[ { "created": "Tue, 22 Nov 2022 20:52:50 GMT", "version": "v1" }, { "created": "Fri, 26 May 2023 00:02:42 GMT", "version": "v2" }, { "created": "Sat, 13 Jul 2024 02:55:33 GMT", "version": "v3" } ]
2024-07-16
[ [ "Cameron", "Chris", "" ], [ "Hartford", "Jason", "" ], [ "Lundy", "Taylor", "" ], [ "Truong", "Tuan", "" ], [ "Milligan", "Alan", "" ], [ "Chen", "Rex", "" ], [ "Leyton-Brown", "Kevin", "" ] ]
We introduce Monte Carlo Forest Search (MCFS), a class of reinforcement learning (RL) algorithms for learning policies in {tree MDPs}, for which policy execution involves traversing an exponential-sized tree. Examples of such problems include proving unsatisfiability of a SAT formula; counting the number of solutions of a satisfiable SAT formula; and finding the optimal solution to a mixed-integer program. MCFS algorithms can be seen as extensions of Monte Carlo Tree Search (MCTS) to cases where, rather than finding a good path (solution) within a tree, the problem is to find a small tree within a forest of candidate trees. We instantiate and evaluate our ideas in an algorithm that we dub Knuth Synthesis, an MCFS algorithm that learns DPLL branching policies for solving the Boolean satisfiability (SAT) problem, with the objective of achieving good average-case performance on a given distribution of unsatisfiable problem instances. Knuth Synthesis is the first RL approach to avoid the prohibitive costs of policy evaluations in an exponentially-sized tree, leveraging two key ideas: first, we estimate tree size by randomly sampling paths and measuring their lengths, drawing on an unbiased approximation due to Knuth (1975); second, we query a strong solver at a user-defined depth rather than learning a policy across the whole tree, to focus our policy search on early decisions that offer the greatest potential for reducing tree size. We matched or exceeded the performance of a strong baseline on three well-known SAT distributions, facing problems that were two orders of magnitude more challenging than those addressed in previous RL studies.
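The tree-size estimator credited to Knuth (1975) in the abstract is simple enough to sketch: walk a single random root-to-leaf path and sum the running product of branching factors seen along the way; the sum is an unbiased estimate of the number of tree nodes. The toy tree below is an assumption for illustration; a DPLL solver would expand branches lazily instead.

```python
# Knuth's unbiased tree-size estimator via one random root-to-leaf walk (illustrative).
import random

def knuth_estimate(children, root, rng):
    estimate, weight, node = 1.0, 1.0, root
    while True:
        kids = children(node)
        if not kids:
            return estimate
        weight *= len(kids)              # inverse probability of the sampled path so far
        estimate += weight
        node = rng.choice(kids)

def children(node):                      # full binary tree of depth 3, nodes encoded as tuples
    return [] if len(node) == 3 else [node + (0,), node + (1,)]

samples = [knuth_estimate(children, (), random.Random(i)) for i in range(200)]
print(sum(samples) / len(samples))       # exactly 15.0: a full binary tree of depth 3 has 15 nodes
```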
1704.08347
Jiachun Liao
Jiachun Liao, Lalitha Sankar, Vincent Y. F. Tan and Flavio P. Calmon
Hypothesis Testing under Mutual Information Privacy Constraints in the High Privacy Regime
13 pages, 7 figures. The paper is submitted to "Transactions on Information Forensics & Security". Comparing to the paper arXiv:1607.00533 "Hypothesis Testing in the High Privacy Limit", the overlapping content is results for binary hypothesis testing with a zero error exponent, and the extended contents are the results for both m-ary hypothesis testing and binary hypothesis testing with nonzero error exponents
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hypothesis testing is a statistical inference framework for determining the true distribution among a set of possible distributions for a given dataset. Privacy restrictions may require the curator of the data or the respondents themselves to share data with the test only after applying a randomizing privacy mechanism. This work considers mutual information (MI) as the privacy metric for measuring leakage. In addition, motivated by the Chernoff-Stein lemma, the relative entropy between pairs of distributions of the output (generated by the privacy mechanism) is chosen as the utility metric. For these metrics, the goal is to find the optimal privacy-utility trade-off (PUT) and the corresponding optimal privacy mechanism for both binary and m-ary hypothesis testing. Focusing on the high privacy regime, Euclidean information-theoretic approximations of the binary and m-ary PUT problems are developed. The solutions for the approximation problems clarify that an MI-based privacy metric preserves the privacy of the source symbols in inverse proportion to their likelihoods.
[ { "created": "Wed, 26 Apr 2017 20:48:58 GMT", "version": "v1" } ]
2017-04-28
[ [ "Liao", "Jiachun", "" ], [ "Sankar", "Lalitha", "" ], [ "Tan", "Vincent Y. F.", "" ], [ "Calmon", "Flavio P.", "" ] ]
Hypothesis testing is a statistical inference framework for determining the true distribution among a set of possible distributions for a given dataset. Privacy restrictions may require the curator of the data or the respondents themselves to share data with the test only after applying a randomizing privacy mechanism. This work considers mutual information (MI) as the privacy metric for measuring leakage. In addition, motivated by the Chernoff-Stein lemma, the relative entropy between pairs of distributions of the output (generated by the privacy mechanism) is chosen as the utility metric. For these metrics, the goal is to find the optimal privacy-utility trade-off (PUT) and the corresponding optimal privacy mechanism for both binary and m-ary hypothesis testing. Focusing on the high privacy regime, Euclidean information-theoretic approximations of the binary and m-ary PUT problems are developed. The solutions for the approximation problems clarify that an MI-based privacy metric preserves the privacy of the source symbols in inverse proportion to their likelihoods.
1910.00308
Martin Schirneck
Thomas Bl\"asius, Tobias Friedrich, Martin Schirneck
The Minimization of Random Hypergraphs
28 pages, 2 figures; Changes: binomial characterization unified, improvement of the Chernoff-Hoeffding theorem extended to case x --> p
null
null
null
cs.DM cs.DS math.CO math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the maximum-entropy model $\mathcal{B}_{n,m,p}$ for random $n$-vertex, $m$-edge multi-hypergraphs with expected edge size $pn$. We show that the expected size of the minimization of $\mathcal{B}_{n,m,p}$, i.e., the number of its inclusion-wise minimal edges, undergoes a phase transition with respect to $m$. If $m$ is at most $1/(1-p)^{(1-p)n}$, then the minimization is of size $\Theta(m)$. Beyond that point, for $\alpha$ such that $m = 1/(1-p)^{\alpha n}$ and $\mathrm{H}$ being the entropy function, it is $\Theta(1) \cdot \min\!\left(1, \, \frac{1}{(\alpha\,{-}\,(1-p)) \sqrt{(1\,{-}\,\alpha) n}}\right) \cdot 2^{(\mathrm{H}(\alpha) + (1-\alpha) \log_2 p) n}.$ This implies that the maximum expected size over all $m$ is $\Theta((1+p)^n/\sqrt{n})$. Our structural findings have algorithmic implications for minimizing an input hypergraph, which in turn has applications in the profiling of relational databases as well as for the Orthogonal Vectors problem studied in fine-grained complexity. The main technical tool is an improvement of the Chernoff--Hoeffding inequality, which we make tight up to constant factors. We show that for a binomial variable $X \sim \mathrm{Bin}(n,p)$ and real number $0 < x \le p$, it holds that $\mathrm{P}[X \le xn] = \Theta(1) \cdot \min\!\left(1, \, \frac{1}{(p-x) \sqrt{xn}}\right) \cdot 2^{-\!\mathrm{D}(x \,{\|}\, p) n}$, where $\mathrm{D}$ denotes the Kullback--Leibler divergence between Bernoulli distributions. The result remains true if $x$ depends on $n$ as long as it is bounded away from $0$.
[ { "created": "Tue, 1 Oct 2019 11:23:19 GMT", "version": "v1" }, { "created": "Thu, 13 Feb 2020 15:32:14 GMT", "version": "v2" }, { "created": "Fri, 30 Oct 2020 11:17:31 GMT", "version": "v3" } ]
2020-11-03
[ [ "Bläsius", "Thomas", "" ], [ "Friedrich", "Tobias", "" ], [ "Schirneck", "Martin", "" ] ]
We investigate the maximum-entropy model $\mathcal{B}_{n,m,p}$ for random $n$-vertex, $m$-edge multi-hypergraphs with expected edge size $pn$. We show that the expected size of the minimization of $\mathcal{B}_{n,m,p}$, i.e., the number of its inclusion-wise minimal edges, undergoes a phase transition with respect to $m$. If $m$ is at most $1/(1-p)^{(1-p)n}$, then the minimization is of size $\Theta(m)$. Beyond that point, for $\alpha$ such that $m = 1/(1-p)^{\alpha n}$ and $\mathrm{H}$ being the entropy function, it is $\Theta(1) \cdot \min\!\left(1, \, \frac{1}{(\alpha\,{-}\,(1-p)) \sqrt{(1\,{-}\,\alpha) n}}\right) \cdot 2^{(\mathrm{H}(\alpha) + (1-\alpha) \log_2 p) n}.$ This implies that the maximum expected size over all $m$ is $\Theta((1+p)^n/\sqrt{n})$. Our structural findings have algorithmic implications for minimizing an input hypergraph, which in turn has applications in the profiling of relational databases as well as for the Orthogonal Vectors problem studied in fine-grained complexity. The main technical tool is an improvement of the Chernoff--Hoeffding inequality, which we make tight up to constant factors. We show that for a binomial variable $X \sim \mathrm{Bin}(n,p)$ and real number $0 < x \le p$, it holds that $\mathrm{P}[X \le xn] = \Theta(1) \cdot \min\!\left(1, \, \frac{1}{(p-x) \sqrt{xn}}\right) \cdot 2^{-\!\mathrm{D}(x \,{\|}\, p) n}$, where $\mathrm{D}$ denotes the Kullback--Leibler divergence between Bernoulli distributions. The result remains true if $x$ depends on $n$ as long as it is bounded away from $0$.
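The tightened Chernoff--Hoeffding bound stated above can be sanity-checked numerically. The following small Python sketch (the parameters n, p, x are arbitrary illustration choices, not values from the paper) compares the exact binomial lower tail with the classic exponential bound and with the 1/((p-x) sqrt(xn))-corrected term.

```python
import math

def kl_bernoulli(x, p):
    """Kullback-Leibler divergence D(x || p) between Bernoulli(x) and Bernoulli(p), in bits."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log2(a / b)
    return term(x, p) + term(1 - x, 1 - p)

def binom_lower_tail(n, p, k):
    """Exact P[Bin(n, p) <= k] via direct summation (fine for moderate n)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, p, x = 200, 0.5, 0.3
exact = binom_lower_tail(n, p, int(x * n))
chernoff = 2 ** (-kl_bernoulli(x, p) * n)                          # classic upper bound
refined = chernoff * min(1.0, 1.0 / ((p - x) * math.sqrt(x * n)))  # corrected term
print(exact, chernoff, refined)  # the refined term tracks the exact tail up to a constant
```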
2011.03915
Kun He
Weiming Feng, Kun He, Yitong Yin
Sampling Constraint Satisfaction Solutions in the Local Lemma Regime
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a Markov chain based algorithm for sampling almost uniform solutions of constraint satisfaction problems (CSPs). Assuming a canonical setting for the Lov\'asz local lemma, where each constraint is violated by a small number of forbidden local configurations, our sampling algorithm is accurate in a local lemma regime, and the running time is a fixed polynomial whose dependency on $n$ is close to linear, where $n$ is the number of variables. Our main approach is a new technique called state compression, which generalizes the "mark/unmark" paradigm of Moitra (Moitra, JACM, 2019), and can give fast local-lemma-based sampling algorithms. As concrete applications of our technique, we give the current best almost-uniform samplers for hypergraph colorings and for CNF solutions.
[ { "created": "Sun, 8 Nov 2020 07:33:52 GMT", "version": "v1" }, { "created": "Sat, 10 Apr 2021 04:53:23 GMT", "version": "v2" } ]
2021-04-13
[ [ "Feng", "Weiming", "" ], [ "He", "Kun", "" ], [ "Yin", "Yitong", "" ] ]
We give a Markov chain based algorithm for sampling almost uniform solutions of constraint satisfaction problems (CSPs). Assuming a canonical setting for the Lov\'asz local lemma, where each constraint is violated by a small number of forbidden local configurations, our sampling algorithm is accurate in a local lemma regime, and the running time is a fixed polynomial whose dependency on $n$ is close to linear, where $n$ is the number of variables. Our main approach is a new technique called state compression, which generalizes the "mark/unmark" paradigm of Moitra (Moitra, JACM, 2019), and can give fast local-lemma-based sampling algorithms. As concrete applications of our technique, we give the current best almost-uniform samplers for hypergraph colorings and for CNF solutions.
2311.01739
John Tramm
John Tramm and Bryce Allen and Kazutomo Yoshii and Andrew Siegel and Leighton Wilson
Efficient Algorithms for Monte Carlo Particle Transport on AI Accelerator Hardware
null
null
null
null
cs.DC cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent trend toward deep learning has led to the development of a variety of highly innovative AI accelerator architectures. One such architecture, the Cerebras Wafer-Scale Engine 2 (WSE-2), features 40 GB of on-chip SRAM, making it a potentially attractive platform for latency- or bandwidth-bound HPC simulation workloads. In this study, we examine the feasibility of performing continuous energy Monte Carlo (MC) particle transport on the WSE-2 by porting a key kernel from the MC transport algorithm to Cerebras's CSL programming model. New algorithms for minimizing communication costs and for handling load balancing are developed and tested. The WSE-2 is found to run 130 times faster than a highly optimized CUDA version of the kernel run on an NVIDIA A100 GPU -- significantly outpacing the expected performance increase given the difference in transistor counts between the architectures.
[ { "created": "Fri, 3 Nov 2023 06:27:36 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2023 03:22:56 GMT", "version": "v2" } ]
2023-11-08
[ [ "Tramm", "John", "" ], [ "Allen", "Bryce", "" ], [ "Yoshii", "Kazutomo", "" ], [ "Siegel", "Andrew", "" ], [ "Wilson", "Leighton", "" ] ]
The recent trend toward deep learning has led to the development of a variety of highly innovative AI accelerator architectures. One such architecture, the Cerebras Wafer-Scale Engine 2 (WSE-2), features 40 GB of on-chip SRAM, making it a potentially attractive platform for latency- or bandwidth-bound HPC simulation workloads. In this study, we examine the feasibility of performing continuous energy Monte Carlo (MC) particle transport on the WSE-2 by porting a key kernel from the MC transport algorithm to Cerebras's CSL programming model. New algorithms for minimizing communication costs and for handling load balancing are developed and tested. The WSE-2 is found to run 130 times faster than a highly optimized CUDA version of the kernel run on an NVIDIA A100 GPU -- significantly outpacing the expected performance increase given the difference in transistor counts between the architectures.
2108.08798
Roberto Machado
Roberto Assis Machado, Rafael G. L. D'Oliveira, Salim El Rouayheb and Daniel Heinlein
Field Trace Polynomial Codes for Secure Distributed Matrix Multiplication
null
2021 XVII International Symposium "Problems of Redundancy in Information and Control Systems" (REDUNDANCY)
10.1109/REDUNDANCY52534.2021.9606447
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of communication-efficient secure distributed matrix multiplication. The previous literature has focused on reducing the number of servers as a proxy for minimizing communication costs, the intuition being that the more servers are used, the higher the communication cost. We show that this is not the case. Our central technique relies on adapting results from the literature on repairing Reed-Solomon codes: instead of downloading the whole of the computing task, a user downloads field traces of these computations. We present field trace polynomial codes, a family of codes that explores this technique, and characterize regimes in which our codes outperform the existing codes in the literature.
[ { "created": "Thu, 19 Aug 2021 17:16:12 GMT", "version": "v1" }, { "created": "Thu, 9 Jun 2022 17:32:38 GMT", "version": "v2" } ]
2022-06-10
[ [ "Machado", "Roberto Assis", "" ], [ "D'Oliveira", "Rafael G. L.", "" ], [ "Rouayheb", "Salim El", "" ], [ "Heinlein", "Daniel", "" ] ]
We consider the problem of communication-efficient secure distributed matrix multiplication. The previous literature has focused on reducing the number of servers as a proxy for minimizing communication costs, the intuition being that the more servers are used, the higher the communication cost. We show that this is not the case. Our central technique relies on adapting results from the literature on repairing Reed-Solomon codes: instead of downloading the whole of the computing task, a user downloads field traces of these computations. We present field trace polynomial codes, a family of codes that explores this technique, and characterize regimes in which our codes outperform the existing codes in the literature.
1912.01107
Marco Peressotti
Fabio Burco and Marino Miculan and Marco Peressotti
Towards a Formal Model for Composable Container Systems
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In modern cloud-based architectures, containers play a central role: they provide powerful isolation mechanisms such that developers can focus on the logic and dependencies of applications while system administrators can focus on deployment and management issues. In this work, we propose a formal model for container-based systems, using the framework of Bigraphical Reactive Systems (BRSs). We first introduce local directed bigraphs, a graph-based formalism which allows us to deal with localized resources. Then, we define a signature for modelling containers and provide some examples of bigraphs modelling containers. These graphs can be analysed and manipulated using techniques from graph theory: properties about containers can be formalized as properties of the corresponding bigraphic representations. Moreover, it turns out that the composition of containers as performed by, e.g., docker-compose corresponds precisely to the composition of the corresponding bigraphs inside an ``environment bigraph'', which in turn is obtained directly from the YAML file used to define the composition of containers.
[ { "created": "Mon, 2 Dec 2019 22:46:05 GMT", "version": "v1" } ]
2019-12-04
[ [ "Burco", "Fabio", "" ], [ "Miculan", "Marino", "" ], [ "Peressotti", "Marco", "" ] ]
In modern cloud-based architectures, containers play a central role: they provide powerful isolation mechanisms such that developers can focus on the logic and dependencies of applications while system administrators can focus on deployment and management issues. In this work, we propose a formal model for container-based systems, using the framework of Bigraphical Reactive Systems (BRSs). We first introduce local directed bigraphs, a graph-based formalism which allows us to deal with localized resources. Then, we define a signature for modelling containers and provide some examples of bigraphs modelling containers. These graphs can be analysed and manipulated using techniques from graph theory: properties about containers can be formalized as properties of the corresponding bigraphic representations. Moreover, it turns out that the composition of containers as performed by, e.g., docker-compose corresponds precisely to the composition of the corresponding bigraphs inside an ``environment bigraph'', which in turn is obtained directly from the YAML file used to define the composition of containers.
1409.1284
Sawood Alam
Sawood Alam and Fateh ud din B Mehmood and Michael L. Nelson
Improving Accessibility of Archived Raster Dictionaries of Complex Script Languages
11 pages, 5 images, 2 codes, 1 table
null
10.1145/2756406.2756926
null
cs.DL cs.IR
http://creativecommons.org/licenses/by-nc-sa/3.0/
We propose an approach to index raster images of dictionary pages which in turn would require very little manual effort to enable direct access to the appropriate pages of the dictionary for lookup. Accessibility is further improved by feedback and crowdsourcing that enables highlighting of the specific location on the page where the lookup word is found, annotation, digitization, and fielded searching. This approach is equally applicable on simple scripts as well as complex writing systems. Using our proposed approach, we have built a Web application called "Dictionary Explorer" which supports word indexes in various languages and every language can have multiple dictionaries associated with it. Word lookup gives direct access to appropriate pages of all the dictionaries of that language simultaneously. The application has exploration features like searching, pagination, and navigating the word index through a tree-like interface. The application also supports feedback, annotation, and digitization features. Apart from the scanned images, "Dictionary Explorer" aggregates results from various sources and user contributions in Unicode. We have evaluated the time required for indexing dictionaries of different sizes and complexities in the Urdu language and examined various trade-offs in our implementation. Using our approach, a single person can make a dictionary of 1,000 pages searchable in less than an hour.
[ { "created": "Wed, 3 Sep 2014 23:27:18 GMT", "version": "v1" } ]
2019-05-20
[ [ "Alam", "Sawood", "" ], [ "Mehmood", "Fateh ud din B", "" ], [ "Nelson", "Michael L.", "" ] ]
We propose an approach to index raster images of dictionary pages which in turn would require very little manual effort to enable direct access to the appropriate pages of the dictionary for lookup. Accessibility is further improved by feedback and crowdsourcing that enables highlighting of the specific location on the page where the lookup word is found, annotation, digitization, and fielded searching. This approach is equally applicable on simple scripts as well as complex writing systems. Using our proposed approach, we have built a Web application called "Dictionary Explorer" which supports word indexes in various languages and every language can have multiple dictionaries associated with it. Word lookup gives direct access to appropriate pages of all the dictionaries of that language simultaneously. The application has exploration features like searching, pagination, and navigating the word index through a tree-like interface. The application also supports feedback, annotation, and digitization features. Apart from the scanned images, "Dictionary Explorer" aggregates results from various sources and user contributions in Unicode. We have evaluated the time required for indexing dictionaries of different sizes and complexities in the Urdu language and examined various trade-offs in our implementation. Using our approach, a single person can make a dictionary of 1,000 pages searchable in less than an hour.
cs/0611133
Vladimir Migunov
Vladimir V. Migunov
The modelling of the automation schemes of technological processes in CAD-system of renovation of the enterprises
4 pages, 3 figures, in Russian
null
null
null
cs.CE
null
According to the requirements of the Russian standards, automation schemes are necessary in practically every renovation project for industrial buildings and facilities in which technological processes are realized. The model representations of the automation schemes in the CAD-system TechnoCAD GlassX are described. The models follow the principle "to exclude repeated input operations".
[ { "created": "Mon, 27 Nov 2006 04:39:09 GMT", "version": "v1" } ]
2007-05-23
[ [ "Migunov", "Vladimir V.", "" ] ]
According to the requirements of the Russian standards, automation schemes are necessary in practically every renovation project for industrial buildings and facilities in which technological processes are realized. The model representations of the automation schemes in the CAD-system TechnoCAD GlassX are described. The models follow the principle "to exclude repeated input operations".
1507.05890
Bart M. P. Jansen
Bart M. P. Jansen
On Structural Parameterizations of Hitting Set: Hitting Paths in Graphs Using 2-SAT
Presented at the 41st International Workshop on Graph-Theoretic Concepts in Computer Science, WG 2015. (The statement of Lemma 4 was corrected in this update.)
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hitting Set is a classic problem in combinatorial optimization. Its input consists of a set system F over a finite universe U and an integer t; the question is whether there is a set of t elements that intersects every set in F. The Hitting Set problem parameterized by the size of the solution is a well-known W[2]-complete problem in parameterized complexity theory. In this paper we investigate the complexity of Hitting Set under various structural parameterizations of the input. Our starting point is the folklore result that Hitting Set is polynomial-time solvable if there is a tree T on vertex set U such that the sets in F induce connected subtrees of T. We consider the case that there is a treelike graph with vertex set U such that the sets in F induce connected subgraphs; the parameter of the problem is a measure of how treelike the graph is. Our main positive result is an algorithm that, given a graph G with cyclomatic number k, a collection P of simple paths in G, and an integer t, determines in time 2^{5k} (|G| + |P|)^{O(1)} whether there is a vertex set of size t that hits all paths in P. It is based on a connection to the 2-SAT problem in multiple valued logic. For other parameterizations we derive W[1]-hardness and para-NP-completeness results.
[ { "created": "Tue, 21 Jul 2015 16:04:51 GMT", "version": "v1" }, { "created": "Fri, 24 Jul 2015 13:55:37 GMT", "version": "v2" } ]
2015-07-27
[ [ "Jansen", "Bart M. P.", "" ] ]
Hitting Set is a classic problem in combinatorial optimization. Its input consists of a set system F over a finite universe U and an integer t; the question is whether there is a set of t elements that intersects every set in F. The Hitting Set problem parameterized by the size of the solution is a well-known W[2]-complete problem in parameterized complexity theory. In this paper we investigate the complexity of Hitting Set under various structural parameterizations of the input. Our starting point is the folklore result that Hitting Set is polynomial-time solvable if there is a tree T on vertex set U such that the sets in F induce connected subtrees of T. We consider the case that there is a treelike graph with vertex set U such that the sets in F induce connected subgraphs; the parameter of the problem is a measure of how treelike the graph is. Our main positive result is an algorithm that, given a graph G with cyclomatic number k, a collection P of simple paths in G, and an integer t, determines in time 2^{5k} (|G| + |P|)^{O(1)} whether there is a vertex set of size t that hits all paths in P. It is based on a connection to the 2-SAT problem in multiple valued logic. For other parameterizations we derive W[1]-hardness and para-NP-completeness results.
2005.12378
Varoon Mathur
Varoon Mathur, Saptarshi Purkayastha, Judy Wawira Gichoya
Artificial Intelligence for Global Health: Learning From a Decade of Digital Transformation in Health Care
Accepted Paper at ICLR 2020 Workshop on Practical ML for Developing Countries
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The health needs of those living in resource-limited settings are a vastly overlooked and understudied area in the intersection of machine learning (ML) and health care. While the use of ML in health care has been popularized only over the last few years by the advancement of deep learning, low- and middle-income countries (LMICs) have already been undergoing a digital transformation of their own in health care over the last decade, leapfrogging milestones due to the adoption of mobile health (mHealth). With the introduction of new technologies, it is common to start afresh with a top-down approach and implement these technologies in isolation, leading to a lack of use and a waste of resources. In this paper, we outline the necessary considerations both from the perspective of current gaps in research and from the lived experiences of health care professionals in resource-limited settings. We also briefly outline several key components of successful implementation and deployment of technologies within health systems in LMICs, including technical and cultural considerations in the development process relevant to building machine learning solutions. We then draw on these experiences to address where key opportunities for impact exist in resource-limited settings and where AI/ML can provide the most benefit.
[ { "created": "Wed, 20 May 2020 23:50:17 GMT", "version": "v1" }, { "created": "Wed, 27 May 2020 06:54:20 GMT", "version": "v2" } ]
2020-05-28
[ [ "Mathur", "Varoon", "" ], [ "Purkayastha", "Saptarshi", "" ], [ "Gichoya", "Judy Wawira", "" ] ]
The health needs of those living in resource-limited settings are a vastly overlooked and understudied area in the intersection of machine learning (ML) and health care. While the use of ML in health care has been popularized only over the last few years by the advancement of deep learning, low- and middle-income countries (LMICs) have already been undergoing a digital transformation of their own in health care over the last decade, leapfrogging milestones due to the adoption of mobile health (mHealth). With the introduction of new technologies, it is common to start afresh with a top-down approach and implement these technologies in isolation, leading to a lack of use and a waste of resources. In this paper, we outline the necessary considerations both from the perspective of current gaps in research and from the lived experiences of health care professionals in resource-limited settings. We also briefly outline several key components of successful implementation and deployment of technologies within health systems in LMICs, including technical and cultural considerations in the development process relevant to building machine learning solutions. We then draw on these experiences to address where key opportunities for impact exist in resource-limited settings and where AI/ML can provide the most benefit.
2210.15360
Rui Liu
Yifan Hu, Rui Liu, Guanglai Gao, Haizhou Li
FCTalker: Fine and Coarse Grained Context Modeling for Expressive Conversational Speech Synthesis
5 pages, 4 figures, 1 table. Submitted to ICASSP 2023. We release the source code at: https://github.com/walker-hyf/FCTalker
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Conversational Text-to-Speech (TTS) aims to synthesize an utterance with the right linguistic and affective prosody in a conversational context. The correlation between the current utterance and the dialogue history at the utterance level has been used to improve the expressiveness of synthesized speech. However, the fine-grained information in the dialogue history at the word level also has an important impact on the prosodic expression of an utterance, which has not been well studied in prior work. Therefore, we propose a novel expressive conversational TTS model, termed FCTalker, that learns fine- and coarse-grained context dependencies at the same time during speech generation. Specifically, FCTalker includes fine- and coarse-grained encoders to exploit the word- and utterance-level context dependencies. To model the word-level dependencies between an utterance and its dialogue history, the fine-grained dialogue encoder is built on top of a dialogue BERT model. The experimental results show that the proposed method outperforms all baselines and generates more expressive speech that is contextually appropriate. We release the source code at: https://github.com/walker-hyf/FCTalker.
[ { "created": "Thu, 27 Oct 2022 12:20:20 GMT", "version": "v1" } ]
2022-10-28
[ [ "Hu", "Yifan", "" ], [ "Liu", "Rui", "" ], [ "Gao", "Guanglai", "" ], [ "Li", "Haizhou", "" ] ]
Conversational Text-to-Speech (TTS) aims to synthesize an utterance with the right linguistic and affective prosody in a conversational context. The correlation between the current utterance and the dialogue history at the utterance level has been used to improve the expressiveness of synthesized speech. However, the fine-grained information in the dialogue history at the word level also has an important impact on the prosodic expression of an utterance, which has not been well studied in prior work. Therefore, we propose a novel expressive conversational TTS model, termed FCTalker, that learns fine- and coarse-grained context dependencies at the same time during speech generation. Specifically, FCTalker includes fine- and coarse-grained encoders to exploit the word- and utterance-level context dependencies. To model the word-level dependencies between an utterance and its dialogue history, the fine-grained dialogue encoder is built on top of a dialogue BERT model. The experimental results show that the proposed method outperforms all baselines and generates more expressive speech that is contextually appropriate. We release the source code at: https://github.com/walker-hyf/FCTalker.
2207.09530
Gilberto Ochoa-Ruiz
Pedro E. Chavarrias-Solanon and Mansoor Ali-Teevno and Gilberto Ochoa-Ruiz and Sharib Ali
Knowledge distillation with a class-aware loss for endoscopic disease detection
Paper accepted at the CaPTion workshop at MICCAI2022
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The prevalence of gastrointestinal (GI) cancer is growing alarmingly every year, leading to a substantial increase in the mortality rate. Endoscopic detection provides crucial diagnostic support; however, subtle lesions in the upper and lower GI tract are quite hard to detect and cause considerable missed detections. In this work, we leverage deep learning to develop a framework to improve the localization of difficult-to-detect lesions and minimize the missed detection rate. We propose an end-to-end student-teacher learning setup where the class probabilities of a teacher model trained on one class with a larger dataset are used to penalize the multi-class student network. Our model achieves higher performance in terms of mean average precision (mAP) on both the endoscopic disease detection (EDD2020) challenge and Kvasir-SEG datasets. Additionally, we show that using such a learning paradigm, our model generalizes to an unseen test set, giving higher APs for the clinically crucial neoplastic and polyp categories.
[ { "created": "Tue, 19 Jul 2022 19:56:12 GMT", "version": "v1" } ]
2022-07-21
[ [ "Chavarrias-Solanon", "Pedro E.", "" ], [ "Ali-Teevno", "Mansoor", "" ], [ "Ochoa-Ruiz", "Gilberto", "" ], [ "Ali", "Sharib", "" ] ]
The prevalence of gastrointestinal (GI) cancer is growing alarmingly every year, leading to a substantial increase in the mortality rate. Endoscopic detection provides crucial diagnostic support; however, subtle lesions in the upper and lower GI tract are quite hard to detect and cause considerable missed detections. In this work, we leverage deep learning to develop a framework to improve the localization of difficult-to-detect lesions and minimize the missed detection rate. We propose an end-to-end student-teacher learning setup where the class probabilities of a teacher model trained on one class with a larger dataset are used to penalize the multi-class student network. Our model achieves higher performance in terms of mean average precision (mAP) on both the endoscopic disease detection (EDD2020) challenge and Kvasir-SEG datasets. Additionally, we show that using such a learning paradigm, our model generalizes to an unseen test set, giving higher APs for the clinically crucial neoplastic and polyp categories.
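The student-teacher setup described above builds on standard knowledge distillation. As a hedged sketch of that generic ingredient only (the paper's class-aware weighting and detection-specific heads are not reproduced; the temperature and mixing weight below are illustrative assumptions), a distillation loss can be written as:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic knowledge-distillation objective: cross-entropy on ground-truth
    labels plus a temperature-softened KL term towards the teacher. This is a
    sketch of the standard student-teacher ingredient, not the paper's
    class-aware loss."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients are comparable across temperatures
    return alpha * hard + (1 - alpha) * soft
```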
1408.5987
Frank Neumann
Sergey Polyakovskiy, Rudolf Berghammer, Frank Neumann
Solving Hard Control Problems in Voting Systems via Integer Programming
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Voting problems are central in the area of social choice. In this article, we investigate various voting systems and types of control of elections. We present integer linear programming (ILP) formulations for a wide range of NP-hard control problems. Our ILP formulations are flexible in the sense that they can work with an arbitrary number of candidates and voters. Using the off-the-shelf solver Cplex, we show that our approaches can manipulate elections with a large number of voters and candidates efficiently.
[ { "created": "Tue, 26 Aug 2014 02:51:50 GMT", "version": "v1" }, { "created": "Wed, 2 Sep 2015 01:10:24 GMT", "version": "v2" } ]
2015-09-03
[ [ "Polyakovskiy", "Sergey", "" ], [ "Berghammer", "Rudolf", "" ], [ "Neumann", "Frank", "" ] ]
Voting problems are central in the area of social choice. In this article, we investigate various voting systems and types of control of elections. We present integer linear programming (ILP) formulations for a wide range of NP-hard control problems. Our ILP formulations are flexible in the sense that they can work with an arbitrary number of candidates and voters. Using the off-the-shelf solver Cplex, we show that our approaches can manipulate elections with a large number of voters and candidates efficiently.
2307.04725
Yuwei Guo
Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, Bo Dai
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
Codes and Supplementary Material: https://github.com/guoyww/AnimateDiff
null
null
null
cs.CV cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advance of text-to-image (T2I) diffusion models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. However, adding motion dynamics to existing high-quality personalized T2Is and enabling them to generate animations remains an open challenge. In this paper, we present AnimateDiff, a practical framework for animating personalized T2I models without requiring model-specific tuning. At the core of our framework is a plug-and-play motion module that can be trained once and seamlessly integrated into any personalized T2Is originating from the same base T2I. Through our proposed training strategy, the motion module effectively learns transferable motion priors from real-world videos. Once trained, the motion module can be inserted into a personalized T2I model to form a personalized animation generator. We further propose MotionLoRA, a lightweight fine-tuning technique for AnimateDiff that enables a pre-trained motion module to adapt to new motion patterns, such as different shot types, at a low training and data collection cost. We evaluate AnimateDiff and MotionLoRA on several public representative personalized T2I models collected from the community. The results demonstrate that our approaches help these models generate temporally smooth animation clips while preserving the visual quality and motion diversity. Codes and pre-trained weights are available at https://github.com/guoyww/AnimateDiff.
[ { "created": "Mon, 10 Jul 2023 17:34:16 GMT", "version": "v1" }, { "created": "Thu, 8 Feb 2024 18:08:57 GMT", "version": "v2" } ]
2024-02-09
[ [ "Guo", "Yuwei", "" ], [ "Yang", "Ceyuan", "" ], [ "Rao", "Anyi", "" ], [ "Liang", "Zhengyang", "" ], [ "Wang", "Yaohui", "" ], [ "Qiao", "Yu", "" ], [ "Agrawala", "Maneesh", "" ], [ "Lin", "Dahua", "" ], [ "Dai", "Bo", "" ] ]
With the advance of text-to-image (T2I) diffusion models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. However, adding motion dynamics to existing high-quality personalized T2Is and enabling them to generate animations remains an open challenge. In this paper, we present AnimateDiff, a practical framework for animating personalized T2I models without requiring model-specific tuning. At the core of our framework is a plug-and-play motion module that can be trained once and seamlessly integrated into any personalized T2Is originating from the same base T2I. Through our proposed training strategy, the motion module effectively learns transferable motion priors from real-world videos. Once trained, the motion module can be inserted into a personalized T2I model to form a personalized animation generator. We further propose MotionLoRA, a lightweight fine-tuning technique for AnimateDiff that enables a pre-trained motion module to adapt to new motion patterns, such as different shot types, at a low training and data collection cost. We evaluate AnimateDiff and MotionLoRA on several public representative personalized T2I models collected from the community. The results demonstrate that our approaches help these models generate temporally smooth animation clips while preserving the visual quality and motion diversity. Codes and pre-trained weights are available at https://github.com/guoyww/AnimateDiff.
2206.10944
Alexey Skrynnik
Alexey Skrynnik, Anton Andreychuk, Konstantin Yakovlev, Aleksandr I. Panov
POGEMA: Partially Observable Grid Environment for Multiple Agents
7 pages, 7 figures
null
null
null
cs.LG cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce POGEMA (https://github.com/AIRI-Institute/pogema), a sandbox for challenging partially observable multi-agent pathfinding (PO-MAPF) problems. This is a grid-based environment that was specifically designed to be a flexible, tunable and scalable benchmark. It can be tailored to a variety of PO-MAPF settings, which can serve as an excellent testing ground for planning and learning methods, and their combination, allowing us to move towards filling the gap between AI planning and learning.
[ { "created": "Wed, 22 Jun 2022 09:39:50 GMT", "version": "v1" } ]
2022-06-23
[ [ "Skrynnik", "Alexey", "" ], [ "Andreychuk", "Anton", "" ], [ "Yakovlev", "Konstantin", "" ], [ "Panov", "Aleksandr I.", "" ] ]
We introduce POGEMA (https://github.com/AIRI-Institute/pogema), a sandbox for challenging partially observable multi-agent pathfinding (PO-MAPF) problems. This is a grid-based environment that was specifically designed to be a flexible, tunable and scalable benchmark. It can be tailored to a variety of PO-MAPF settings, which can serve as an excellent testing ground for planning and learning methods, and their combination, allowing us to move towards filling the gap between AI planning and learning.
2407.19674
Cui Fangming
Fangming Cui, Xun Yang, Chao Wu, Liang Xiao, Xinmei Tian
Advancing Prompt Learning through an External Layer
null
null
10.1145/3664647.3680953
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prompt learning represents a promising method for adapting pre-trained vision-language models (VLMs) to various downstream tasks by learning a set of text embeddings. One challenge inherent to these methods is the poor generalization performance due to the invalidity of the learned text embeddings for unseen tasks. A straightforward approach to bridge this gap is to freeze the text embeddings in prompts, which results in a lack of capacity to adapt VLMs for downstream tasks. To address this dilemma, we propose a paradigm called EnPrompt with a novel External Layer (EnLa). Specifically, we propose a textual external layer and learnable visual embeddings for adapting VLMs to downstream tasks. The learnable external layer is built upon valid embeddings of pre-trained CLIP. This design considers the balance of learning capabilities between the two branches. To align the textual and visual features, we propose a novel two-pronged approach: i) we introduce the optimal transport as the discrepancy metric to align the vision and text modalities, and ii) we introduce a novel strengthening feature to enhance the interaction between these two modalities. Four representative experiments (i.e., base-to-novel generalization, few-shot learning, cross-dataset generalization, domain shifts generalization) across 15 datasets demonstrate that our method outperforms the existing prompt learning method.
[ { "created": "Mon, 29 Jul 2024 03:30:09 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 05:26:13 GMT", "version": "v2" }, { "created": "Wed, 7 Aug 2024 17:45:05 GMT", "version": "v3" }, { "created": "Thu, 8 Aug 2024 02:39:15 GMT", "version": "v4" }, { "created": "Fri, 9 Aug 2024 06:09:44 GMT", "version": "v5" } ]
2024-08-12
[ [ "Cui", "Fangming", "" ], [ "Yang", "Xun", "" ], [ "Wu", "Chao", "" ], [ "Xiao", "Liang", "" ], [ "Tian", "Xinmei", "" ] ]
Prompt learning represents a promising method for adapting pre-trained vision-language models (VLMs) to various downstream tasks by learning a set of text embeddings. One challenge inherent to these methods is the poor generalization performance due to the invalidity of the learned text embeddings for unseen tasks. A straightforward approach to bridge this gap is to freeze the text embeddings in prompts, which results in a lack of capacity to adapt VLMs for downstream tasks. To address this dilemma, we propose a paradigm called EnPrompt with a novel External Layer (EnLa). Specifically, we propose a textual external layer and learnable visual embeddings for adapting VLMs to downstream tasks. The learnable external layer is built upon valid embeddings of pre-trained CLIP. This design considers the balance of learning capabilities between the two branches. To align the textual and visual features, we propose a novel two-pronged approach: i) we introduce the optimal transport as the discrepancy metric to align the vision and text modalities, and ii) we introduce a novel strengthening feature to enhance the interaction between these two modalities. Four representative experiments (i.e., base-to-novel generalization, few-shot learning, cross-dataset generalization, domain shifts generalization) across 15 datasets demonstrate that our method outperforms the existing prompt learning method.
2203.14581
Shasha Mei
Shasha Mei
S2-Net: Self-supervision Guided Feature Representation Learning for Cross-Modality Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Combining the respective advantages of cross-modality images can compensate for the lack of information in a single modality, which has attracted increasing attention of researchers to multi-modal image matching tasks. Meanwhile, due to the great appearance differences between cross-modality image pairs, existing methods often fail to make the feature representations of correspondences as close as possible. In this letter, we design a cross-modality feature representation learning network, S2-Net, which is based on the recently successful detect-and-describe pipeline, originally proposed for visible images but adapted to work with cross-modality image pairs. To solve the consequent problem of optimization difficulties, we introduce self-supervised learning with a well-designed loss function to guide the training without discarding the original advantages. This novel strategy simulates image pairs in the same modality, which also serves as a useful guide for training on cross-modality images. Notably, it does not require additional data but significantly improves the performance and is applicable to all methods of the detect-and-describe pipeline. Extensive experiments are conducted to evaluate the performance of the proposed strategy, compared to both handcrafted and deep learning-based methods. Results show that our formulation of combined optimization of supervised and self-supervised learning outperforms state-of-the-art methods on the RoadScene and RGB-NIR datasets.
[ { "created": "Mon, 28 Mar 2022 08:47:49 GMT", "version": "v1" } ]
2022-03-29
[ [ "Mei", "Shasha", "" ] ]
Combining the respective advantages of cross-modality images can compensate for the lack of information in a single modality, which has attracted increasing attention of researchers to multi-modal image matching tasks. Meanwhile, due to the great appearance differences between cross-modality image pairs, existing methods often fail to make the feature representations of correspondences as close as possible. In this letter, we design a cross-modality feature representation learning network, S2-Net, which is based on the recently successful detect-and-describe pipeline, originally proposed for visible images but adapted to work with cross-modality image pairs. To solve the consequent problem of optimization difficulties, we introduce self-supervised learning with a well-designed loss function to guide the training without discarding the original advantages. This novel strategy simulates image pairs in the same modality, which also serves as a useful guide for training on cross-modality images. Notably, it does not require additional data but significantly improves the performance and is applicable to all methods of the detect-and-describe pipeline. Extensive experiments are conducted to evaluate the performance of the proposed strategy, compared to both handcrafted and deep learning-based methods. Results show that our formulation of combined optimization of supervised and self-supervised learning outperforms state-of-the-art methods on the RoadScene and RGB-NIR datasets.
1702.00585
Massimo Franceschet
Massimo Franceschet and Enrico Bozzo
The temporalized Massey's method
arXiv admin note: text overlap with arXiv:1701.03363
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose and thoroughly investigate a temporalized version of the popular Massey's technique for rating actors in sport competitions. The method can be described as a dynamic temporal process in which team ratings are updated at every match according to their performance during the match and the strength of the opponent team. Using the Italian soccer dataset, we empirically show that the method has good foresight prediction accuracy.
[ { "created": "Thu, 2 Feb 2017 08:54:32 GMT", "version": "v1" } ]
2017-02-03
[ [ "Franceschet", "Massimo", "" ], [ "Bozzo", "Enrico", "" ] ]
We propose and thoroughly investigate a temporalized version of the popular Massey's technique for rating actors in sport competitions. The method can be described as a dynamic temporal process in which team ratings are updated at every match according to their performance during the match and the strength of the opponent team. Using the Italian soccer dataset, we empirically show that the method has good foresight prediction accuracy.
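One plausible reading of such a per-match dynamic update can be sketched as follows; the specific update rule, the learning rate k, and the team names are illustrative assumptions, not the authors' exact formulation.

```python
from collections import defaultdict

def update_ratings(ratings, home, away, home_goals, away_goals, k=0.1):
    """One per-match rating update: move both teams' ratings towards the
    observed score margin, relative to what the current ratings predicted.
    The rule and the learning rate k are illustrative assumptions only."""
    margin = home_goals - away_goals
    expected = ratings[home] - ratings[away]   # margin predicted by current ratings
    error = margin - expected
    ratings[home] += k * error
    ratings[away] -= k * error
    return ratings

ratings = defaultdict(float)
season = [("Juventus", "Roma", 3, 1), ("Roma", "Milan", 0, 0), ("Milan", "Juventus", 1, 2)]
for h, a, hg, ag in season:
    update_ratings(ratings, h, a, hg, ag)
print(dict(ratings))
# Foresight prediction: the sign of ratings[h] - ratings[a] forecasts the next result.
```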
2102.04010
Aojun Zhou
Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li
Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
ICLR2021
null
null
null
cs.CV cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate the models on resource-constrained environments. It can be generally categorized into unstructured fine-grained sparsity that zeroes out multiple individual weights distributed across the neural network, and structured coarse-grained sparsity which prunes blocks of sub-networks of a neural network. Fine-grained sparsity can achieve a high compression ratio but is not hardware-friendly and hence receives limited speed gains. On the other hand, coarse-grained sparsity cannot concurrently achieve both apparent acceleration on modern GPUs and decent performance. In this paper, we are the first to study training from scratch an N:M fine-grained structured sparse network, which can maintain the advantages of both unstructured fine-grained sparsity and structured coarse-grained sparsity simultaneously on specifically designed GPUs. Specifically, a 2:4 sparse network could achieve 2x speed-up without performance drop on Nvidia A100 GPUs. Furthermore, we propose a novel and effective ingredient, sparse-refined straight-through estimator (SR-STE), to alleviate the negative influence of the approximated gradients computed by vanilla STE during optimization. We also define a metric, Sparse Architecture Divergence (SAD), to measure the sparse network's topology change during the training process. Finally, we justify SR-STE's advantages with SAD and demonstrate the effectiveness of SR-STE by performing comprehensive experiments on various tasks. Source codes and models are available at https://github.com/NM-sparsity/NM-sparsity.
[ { "created": "Mon, 8 Feb 2021 05:55:47 GMT", "version": "v1" }, { "created": "Sun, 18 Apr 2021 10:18:00 GMT", "version": "v2" } ]
2021-04-20
[ [ "Zhou", "Aojun", "" ], [ "Ma", "Yukun", "" ], [ "Zhu", "Junnan", "" ], [ "Liu", "Jianbo", "" ], [ "Zhang", "Zhijie", "" ], [ "Yuan", "Kun", "" ], [ "Sun", "Wenxiu", "" ], [ "Li", "Hongsheng", "" ] ]
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate the models on resource-constrained environments. It can be generally categorized into unstructured fine-grained sparsity that zeroes out multiple individual weights distributed across the neural network, and structured coarse-grained sparsity which prunes blocks of sub-networks of a neural network. Fine-grained sparsity can achieve a high compression ratio but is not hardware-friendly and hence receives limited speed gains. On the other hand, coarse-grained sparsity cannot concurrently achieve both apparent acceleration on modern GPUs and decent performance. In this paper, we are the first to study training from scratch an N:M fine-grained structured sparse network, which can maintain the advantages of both unstructured fine-grained sparsity and structured coarse-grained sparsity simultaneously on specifically designed GPUs. Specifically, a 2:4 sparse network could achieve 2x speed-up without performance drop on Nvidia A100 GPUs. Furthermore, we propose a novel and effective ingredient, sparse-refined straight-through estimator (SR-STE), to alleviate the negative influence of the approximated gradients computed by vanilla STE during optimization. We also define a metric, Sparse Architecture Divergence (SAD), to measure the sparse network's topology change during the training process. Finally, we justify SR-STE's advantages with SAD and demonstrate the effectiveness of SR-STE by performing comprehensive experiments on various tasks. Source codes and models are available at https://github.com/NM-sparsity/NM-sparsity.
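The 2:4 pattern referred to above keeps the two largest-magnitude weights in every group of four consecutive weights. A minimal NumPy sketch of the mask construction follows (it assumes the total weight count is divisible by the group size; SR-STE itself is only hinted at in the comments):

```python
import numpy as np

def n_m_sparsity_mask(weights, n=2, m=4):
    """Binary mask keeping the n largest-magnitude weights in every group of m
    consecutive weights (the 2:4 pattern targeted by sparse tensor cores).
    A minimal sketch; assumes weights.size is divisible by m."""
    flat = weights.reshape(-1, m)
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return mask.reshape(weights.shape)

w = np.random.randn(8, 16).astype(np.float32)
mask = n_m_sparsity_mask(w, n=2, m=4)
sparse_w = w * mask  # forward pass uses the pruned weights
assert mask.reshape(-1, 4).sum(axis=1).max() == 2
# During training, SR-STE passes gradients through the mask and additionally
# regularizes the pruned weights (details in the paper); this sketch only
# shows how the mask itself is built.
```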
cs/9908001
Zvi Marx
Zvika Marx (1 and 2), Ido Dagan (1), Eli Shamir (2) ((1) Bar-Ilan University, (2) The Hebrew University of Jerusalem)
Detecting Sub-Topic Correspondence through Bipartite Term Clustering
html with 3 gif figures; generated from 7 pages MS-Word file
Proceedings of ACL'99 Workshop on Unsupervised Learning in Natural Language Processing, 1999, pp 45-51
null
null
cs.CL
null
This paper addresses a novel task of detecting sub-topic correspondence in a pair of text fragments, enhancing common notions of text similarity. This task is addressed by coupling corresponding term subsets through bipartite clustering. The paper presents a cost-based clustering scheme and compares it with a bipartite version of the single-link method, providing illustrating results.
[ { "created": "Sun, 1 Aug 1999 14:02:57 GMT", "version": "v1" } ]
2007-05-23
[ [ "Marx", "Zvika", "", "1 and 2" ], [ "Dagan", "Ido", "" ], [ "Shamir", "Eli", "" ] ]
This paper addresses a novel task of detecting sub-topic correspondence in a pair of text fragments, enhancing common notions of text similarity. This task is addressed by coupling corresponding term subsets through bipartite clustering. The paper presents a cost-based clustering scheme and compares it with a bipartite version of the single-link method, providing illustrating results.
1708.01771
Rongxiang Weng
Rongxiang Weng, Shujian Huang, Zaixiang Zheng, Xinyu Dai and Jiajun Chen
Neural Machine Translation with Word Predictions
Accepted at EMNLP2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the encoder-decoder architecture for neural machine translation (NMT), the hidden states of the recurrent structures in the encoder and decoder carry the crucial information about the sentence. These vectors are generated by parameters which are updated by back-propagation of translation errors through time. We argue that propagating errors through the end-to-end recurrent structures is not a direct way of controlling the hidden vectors. In this paper, we propose to use word predictions as a mechanism for direct supervision. More specifically, we require these vectors to be able to predict the vocabulary of the target sentence. Our simple mechanism ensures better representations in the encoder and decoder without using any extra data or annotation. It is also helpful in reducing the target-side vocabulary and improving the decoding efficiency. Experiments on Chinese-English and German-English machine translation tasks show BLEU improvements of 4.53 and 1.3, respectively.
[ { "created": "Sat, 5 Aug 2017 13:38:10 GMT", "version": "v1" } ]
2017-08-08
[ [ "Weng", "Rongxiang", "" ], [ "Huang", "Shujian", "" ], [ "Zheng", "Zaixiang", "" ], [ "Dai", "Xinyu", "" ], [ "Chen", "Jiajun", "" ] ]
In the encoder-decoder architecture for neural machine translation (NMT), the hidden states of the recurrent structures in the encoder and decoder carry the crucial information about the sentence. These vectors are generated by parameters which are updated by back-propagation of translation errors through time. We argue that propagating errors through the end-to-end recurrent structures is not a direct way of controlling the hidden vectors. In this paper, we propose to use word predictions as a mechanism for direct supervision. More specifically, we require these vectors to be able to predict the vocabulary of the target sentence. Our simple mechanism ensures better representations in the encoder and decoder without using any extra data or annotation. It is also helpful in reducing the target-side vocabulary and improving the decoding efficiency. Experiments on Chinese-English and German-English machine translation tasks show BLEU improvements of 4.53 and 1.3, respectively.
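The word-prediction supervision described above can be approximated by an auxiliary multi-label objective on pooled hidden states. The following PyTorch sketch is a hedged illustration, not the authors' exact architecture; the mean pooling and the binary cross-entropy choice are assumptions introduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordPredictionLoss(nn.Module):
    """Auxiliary supervision sketch: ask a pooled hidden state to predict which
    target-vocabulary words occur in the reference sentence (a bag-of-words
    objective). Pooling and loss choices are illustrative assumptions."""
    def __init__(self, hidden_size, tgt_vocab_size):
        super().__init__()
        self.proj = nn.Linear(hidden_size, tgt_vocab_size)

    def forward(self, hidden_states, tgt_token_ids):
        # hidden_states: (batch, seq_len, hidden), tgt_token_ids: (batch, tgt_len)
        pooled = hidden_states.mean(dim=1)          # (batch, hidden)
        logits = self.proj(pooled)                  # (batch, vocab)
        bow = torch.zeros_like(logits)
        bow.scatter_(1, tgt_token_ids, 1.0)         # multi-hot target vocabulary
        # (padding ids should be excluded in practice)
        return F.binary_cross_entropy_with_logits(logits, bow)

# Usage idea: total_loss = translation_loss + lambda_wp * word_pred_loss(enc_states, tgt_ids)
```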
2007.13850
Pankaj Khatiwada
Pankaj Khatiwada, Hari Bhusal, Ayan Chatterjee, Martin W. Gerdess
A Proposed Access Control-Based Privacy Preservation Model to Share Healthcare Data in Cloud
null
null
null
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Healthcare data in cloud computing facilitates the efficient treatment of patients by sharing personal health data between healthcare providers for medical consultation. Furthermore, retaining the confidentiality of data and patients' identity is another challenging task. This paper presents the concept of an access control-based (AC) privacy preservation model for the mutual authentication of users and data owners in the proposed digital system. The proposed model offers a high security guarantee and high efficiency. The proposed digital system consists of four different entities: user, data owner, cloud server, and key generation center (KGC). This approach makes the system more robust and highly secure, which has been verified with multiple scenarios. Besides, the proposed model consists of the setup phase, key generation phase, encryption phase, validation phase, access control phase, and data sharing phase. The setup phase is run by the data owner; it takes a security parameter as input and generates the system master key and security parameters. Then, in the key generation phase, the private key is generated by the KGC and stored in the cloud server. After that, the generated private key is encrypted. Then, the session key is generated by the KGC and granted to the user and cloud server for storage, and the results are verified in the validation phase using validation messages. Finally, the data is shared with the user and decrypted at the user end. The proposed model outperforms other methods with a maximal genuine data rate of 0.91.
[ { "created": "Mon, 27 Jul 2020 20:32:51 GMT", "version": "v1" } ]
2020-07-29
[ [ "Khatiwada", "Pankaj", "" ], [ "Bhusal", "Hari", "" ], [ "Chatterjee", "Ayan", "" ], [ "Gerdess", "Martin W.", "" ] ]
Healthcare data in cloud computing facilitates the efficient treatment of patients by sharing personal health data between healthcare providers for medical consultation. Furthermore, retaining the confidentiality of data and patients' identity is another challenging task. This paper presents the concept of an access control-based (AC) privacy preservation model for the mutual authentication of users and data owners in the proposed digital system. The proposed model offers a high security guarantee and high efficiency. The proposed digital system consists of four different entities: user, data owner, cloud server, and key generation center (KGC). This approach makes the system more robust and highly secure, which has been verified with multiple scenarios. Besides, the proposed model consists of the setup phase, key generation phase, encryption phase, validation phase, access control phase, and data sharing phase. The setup phase is run by the data owner; it takes a security parameter as input and generates the system master key and security parameters. Then, in the key generation phase, the private key is generated by the KGC and stored in the cloud server. After that, the generated private key is encrypted. Then, the session key is generated by the KGC and granted to the user and cloud server for storage, and the results are verified in the validation phase using validation messages. Finally, the data is shared with the user and decrypted at the user end. The proposed model outperforms other methods with a maximal genuine data rate of 0.91.
2311.05221
Sven Sickert
Tim B\"uchner and Sven Sickert and Gerd Fabian Volk and Christoph Anders and Orlando Guntinas-Lichius and Joachim Denzler
Let's Get the FACS Straight -- Reconstructing Obstructed Facial Features
VISAPP 2023 paper
null
10.5220/0011619900003417
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human face is one of the most crucial parts of interhuman communication. Even when parts of the face are hidden or obstructed, the underlying facial movements can be understood. Machine learning approaches often fail in that regard due to the complexity of the facial structures. To alleviate this problem, a common approach is to fine-tune a model for such a specific application. However, this is computationally intensive and might have to be repeated for each desired analysis task. In this paper, we propose to reconstruct obstructed facial parts to avoid the task of repeated fine-tuning. As a result, existing facial analysis methods can be used without further changes with respect to the data. In our approach, the restoration of facial features is interpreted as a style transfer task between different recording setups. By using the CycleGAN architecture, the requirement of matched pairs, which is often hard to fulfill, can be eliminated. To prove the viability of our approach, we compare our reconstructions with real unobstructed recordings. We created a novel data set in which 36 test subjects were recorded both with and without 62 surface electromyography sensors attached to their faces. In our evaluation, we feature typical facial analysis tasks, such as the computation of Facial Action Units and the detection of emotions. To further assess the quality of the restoration, we also compare perceptual distances. We show that scores similar to the videos without obstructing sensors can be achieved.
[ { "created": "Thu, 9 Nov 2023 09:09:20 GMT", "version": "v1" }, { "created": "Fri, 10 Nov 2023 07:38:33 GMT", "version": "v2" } ]
2024-02-14
[ [ "Büchner", "Tim", "" ], [ "Sickert", "Sven", "" ], [ "Volk", "Gerd Fabian", "" ], [ "Anders", "Christoph", "" ], [ "Guntinas-Lichius", "Orlando", "" ], [ "Denzler", "Joachim", "" ] ]
The human face is one of the most crucial parts of interhuman communication. Even when parts of the face are hidden or obstructed, the underlying facial movements can be understood. Machine learning approaches often fail in that regard due to the complexity of the facial structures. To alleviate this problem, a common approach is to fine-tune a model for such a specific application. However, this is computationally intensive and might have to be repeated for each desired analysis task. In this paper, we propose to reconstruct obstructed facial parts to avoid the task of repeated fine-tuning. As a result, existing facial analysis methods can be used without further changes with respect to the data. In our approach, the restoration of facial features is interpreted as a style transfer task between different recording setups. By using the CycleGAN architecture, the requirement of matched pairs, which is often hard to fulfill, can be eliminated. To prove the viability of our approach, we compare our reconstructions with real unobstructed recordings. We created a novel data set in which 36 test subjects were recorded both with and without 62 surface electromyography sensors attached to their faces. In our evaluation, we feature typical facial analysis tasks, such as the computation of Facial Action Units and the detection of emotions. To further assess the quality of the restoration, we also compare perceptual distances. We show that scores similar to the videos without obstructing sensors can be achieved.
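The cycle-consistency idea that removes the need for matched image pairs can be illustrated with a short PyTorch sketch; the tiny convolutional generators below are placeholders, not the networks used in the paper.

```python
# Minimal sketch of cycle-consistency between two recording setups
# (with vs. without attached sensors); a sketch, not the paper's model.
import torch
import torch.nn as nn

def tiny_generator():
    # Maps a 3-channel image to a 3-channel image of the same size.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    )

G_AB = tiny_generator()   # obstructed -> unobstructed
G_BA = tiny_generator()   # unobstructed -> obstructed

x_A = torch.randn(4, 3, 64, 64)   # batch of obstructed face crops
fake_B = G_AB(x_A)                # restored (sensor-free) estimate
rec_A = G_BA(fake_B)              # mapped back to the obstructed domain

# Translating A -> B -> A should reproduce the input, so no pixel-aligned
# pairs of obstructed/unobstructed images are required for this loss.
cycle_loss = nn.functional.l1_loss(rec_A, x_A)
cycle_loss.backward()
```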
2311.07780
Rui Duan
Rui Duan, Zhe Qu, Leah Ding, Yao Liu, Zhuo Lu
Parrot-Trained Adversarial Examples: Pushing the Practicality of Black-Box Audio Attacks against Speaker Recognition Models
null
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/publicdomain/zero/1.0/
Audio adversarial examples (AEs) have posed significant security challenges to real-world speaker recognition systems. Most black-box attacks still require certain information from the speaker recognition model to be effective (e.g., keeping probing and requiring knowledge of similarity scores). This work aims to push the practicality of black-box attacks by minimizing the attacker's knowledge about a target speaker recognition model. Although it is not feasible for an attacker to succeed with completely zero knowledge, we assume that the attacker only knows a short (a few seconds) speech sample of a target speaker. Without any probing to gain further knowledge about the target model, we propose a new mechanism, called parrot training, to generate AEs against the target model. Motivated by recent advancements in voice conversion (VC), we propose to use the knowledge from this one short sentence to generate more synthetic speech samples that sound like the target speaker, called parrot speech. Then, we use these parrot speech samples to train a parrot-trained (PT) surrogate model for the attacker. Under a joint transferability and perception framework, we investigate different ways to generate AEs on the PT model (called PT-AEs) to ensure that the PT-AEs can be generated with high transferability to a black-box target model and good human perceptual quality. Real-world experiments show that the resultant PT-AEs achieve attack success rates of 45.8% - 80.8% against the open-source models in the digital-line scenario and 47.9% - 58.3% against smart devices, including Apple HomePod (Siri), Amazon Echo, and Google Home, in the over-the-air scenario.
[ { "created": "Mon, 13 Nov 2023 22:12:19 GMT", "version": "v1" }, { "created": "Fri, 17 Nov 2023 21:34:33 GMT", "version": "v2" } ]
2023-11-21
[ [ "Duan", "Rui", "" ], [ "Qu", "Zhe", "" ], [ "Ding", "Leah", "" ], [ "Liu", "Yao", "" ], [ "Lu", "Zhuo", "" ] ]
Audio adversarial examples (AEs) have posed significant security challenges to real-world speaker recognition systems. Most black-box attacks still require certain information from the speaker recognition model to be effective (e.g., keeping probing and requiring knowledge of similarity scores). This work aims to push the practicality of black-box attacks by minimizing the attacker's knowledge about a target speaker recognition model. Although it is not feasible for an attacker to succeed with completely zero knowledge, we assume that the attacker only knows a short (a few seconds) speech sample of a target speaker. Without any probing to gain further knowledge about the target model, we propose a new mechanism, called parrot training, to generate AEs against the target model. Motivated by recent advancements in voice conversion (VC), we propose to use the knowledge from this one short sentence to generate more synthetic speech samples that sound like the target speaker, called parrot speech. Then, we use these parrot speech samples to train a parrot-trained (PT) surrogate model for the attacker. Under a joint transferability and perception framework, we investigate different ways to generate AEs on the PT model (called PT-AEs) to ensure that the PT-AEs can be generated with high transferability to a black-box target model and good human perceptual quality. Real-world experiments show that the resultant PT-AEs achieve attack success rates of 45.8% - 80.8% against the open-source models in the digital-line scenario and 47.9% - 58.3% against smart devices, including Apple HomePod (Siri), Amazon Echo, and Google Home, in the over-the-air scenario.
2208.00344
Leena Mathur
Leena Mathur, Ralph Adolphs, Maja J Matari\'c
Towards Intercultural Affect Recognition: Audio-Visual Affect Recognition in the Wild Across Six Cultures
Accepted at IEEE International Conference on Automatic Face and Gesture Recognition (FG 2023), publication and presentation at refereed IEEE workshop
null
null
null
cs.CV cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In our multicultural world, affect-aware AI systems that support humans need the ability to perceive affect across variations in emotion expression patterns across cultures. These systems must perform well in cultural contexts without annotated affect datasets available for training models. A standard assumption in affective computing is that affect recognition models trained and used within the same culture (intracultural) will perform better than models trained on one culture and used on different cultures (intercultural). We test this assumption and present the first systematic study of intercultural affect recognition models using videos of real-world dyadic interactions from six cultures. We develop an attention-based feature selection approach under temporal causal discovery to identify behavioral cues that can be leveraged in intercultural affect recognition models. Across all six cultures, our findings demonstrate that intercultural affect recognition models were as effective or more effective than intracultural models. We identify and contribute useful behavioral features for intercultural affect recognition; facial features from the visual modality were more useful than the audio modality in this study's context. Our paper presents a proof-of-concept and motivation for the future development of intercultural affect recognition systems, especially those deployed in low-resource situations without annotated data.
[ { "created": "Sun, 31 Jul 2022 02:39:17 GMT", "version": "v1" }, { "created": "Sat, 17 Sep 2022 21:14:00 GMT", "version": "v2" }, { "created": "Mon, 31 Oct 2022 08:28:04 GMT", "version": "v3" } ]
2022-11-01
[ [ "Mathur", "Leena", "" ], [ "Adolphs", "Ralph", "" ], [ "Matarić", "Maja J", "" ] ]
In our multicultural world, affect-aware AI systems that support humans need the ability to perceive affect across variations in emotion expression patterns across cultures. These systems must perform well in cultural contexts without annotated affect datasets available for training models. A standard assumption in affective computing is that affect recognition models trained and used within the same culture (intracultural) will perform better than models trained on one culture and used on different cultures (intercultural). We test this assumption and present the first systematic study of intercultural affect recognition models using videos of real-world dyadic interactions from six cultures. We develop an attention-based feature selection approach under temporal causal discovery to identify behavioral cues that can be leveraged in intercultural affect recognition models. Across all six cultures, our findings demonstrate that intercultural affect recognition models were as effective or more effective than intracultural models. We identify and contribute useful behavioral features for intercultural affect recognition; facial features from the visual modality were more useful than the audio modality in this study's context. Our paper presents a proof-of-concept and motivation for the future development of intercultural affect recognition systems, especially those deployed in low-resource situations without annotated data.
2305.14208
Anmol Kabra
Anmol Kabra, Ethan R. Elenberg
Domain Private Transformers for Multi-Domain Dialog Systems
Accepted to Findings of EMNLP 2023 (short paper). Code available at https://github.com/asappresearch/domain-private-transformers
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large, general purpose language models have demonstrated impressive performance across many different conversational domains. While multi-domain language models achieve low overall perplexity, their outputs are not guaranteed to stay within the domain of a given input prompt. This paper proposes domain privacy as a novel way to quantify how likely a conditional language model will leak across domains. We also develop policy functions based on token-level domain classification, and propose an efficient fine-tuning method to improve the trained model's domain privacy. Experiments on membership inference attacks show that our proposed method has comparable resiliency to methods adapted from recent literature on differentially private language models.
[ { "created": "Tue, 23 May 2023 16:27:12 GMT", "version": "v1" }, { "created": "Thu, 7 Dec 2023 19:46:09 GMT", "version": "v2" } ]
2023-12-11
[ [ "Kabra", "Anmol", "" ], [ "Elenberg", "Ethan R.", "" ] ]
Large, general purpose language models have demonstrated impressive performance across many different conversational domains. While multi-domain language models achieve low overall perplexity, their outputs are not guaranteed to stay within the domain of a given input prompt. This paper proposes domain privacy as a novel way to quantify how likely a conditional language model will leak across domains. We also develop policy functions based on token-level domain classification, and propose an efficient fine-tuning method to improve the trained model's domain privacy. Experiments on membership inference attacks show that our proposed method has comparable resiliency to methods adapted from recent literature on differentially private language models.
1810.03145
Ruohan Wang
Ruohan Wang and Pierluigi V. Amadori and Yiannis Demiris
Real-Time Workload Classification during Driving using HyperNetworks
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)
null
null
null
cs.HC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classifying human cognitive states from behavioral and physiological signals is a challenging problem with important applications in robotics. The problem is challenging due to the data variability among individual users, and sensor artefacts. In this work, we propose an end-to-end framework for real-time cognitive workload classification with mixture Hyper Long Short Term Memory Networks, a novel variant of HyperNetworks. Evaluating the proposed approach on an eye-gaze pattern dataset collected from simulated driving scenarios of different cognitive demands, we show that the proposed framework outperforms previous baseline methods and achieves 83.9\% precision and 87.8\% recall during test. We also demonstrate the merit of our proposed architecture by showing improved performance over other LSTM-based methods.
[ { "created": "Sun, 7 Oct 2018 13:57:25 GMT", "version": "v1" } ]
2018-10-09
[ [ "Wang", "Ruohan", "" ], [ "Amadori", "Pierluigi V.", "" ], [ "Demiris", "Yiannis", "" ] ]
Classifying human cognitive states from behavioral and physiological signals is a challenging problem with important applications in robotics. The problem is challenging due to the data variability among individual users, and sensor artefacts. In this work, we propose an end-to-end framework for real-time cognitive workload classification with mixture Hyper Long Short Term Memory Networks, a novel variant of HyperNetworks. Evaluating the proposed approach on an eye-gaze pattern dataset collected from simulated driving scenarios of different cognitive demands, we show that the proposed framework outperforms previous baseline methods and achieves 83.9\% precision and 87.8\% recall during test. We also demonstrate the merit of our proposed architecture by showing improved performance over other LSTM-based methods.
1310.6901
Trang Cao
Trang Cao Minh, Boris Bellalta, Simon Oechsner, Ruizhi Liao and Miquel Oliver
Managing Heterogeneous WSNs in Smart Cities: Challenges and Requirements
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dramatic advances in wireless communications and electronics have enabled the development of Wireless Sensor Networks (WSNs). WSNs consist of many affordable and portable sensor nodes for collecting data from the environment. In this article, we address management requirements of WSNs through presenting some key management scenarios in the Smart Cities context, such as intelligent transportation systems, smart grids and smart buildings. The limited resources and heterogeneous characteristics of WSNs pose new challenges in network management, which include the presence of various faults, the difficulty in replacing and repairing a large number of sensor nodes, the existence of an uncertain topology, and the resource allocation. To cope with these challenges, we first discuss advantages and disadvantages of centralized and distributed management approaches and then discuss the benefit of the multilevel management schema. Next, we present in detail the specific features for a WSN management system such as lightweight, self-detection, self-configuration, sharing infrastructure, service monitoring, plug and play, context awareness and interoperability. Finally, we present the required mechanisms for some basic management functions.
[ { "created": "Fri, 25 Oct 2013 13:08:27 GMT", "version": "v1" }, { "created": "Mon, 28 Oct 2013 10:47:02 GMT", "version": "v2" } ]
2013-10-29
[ [ "Minh", "Trang Cao", "" ], [ "Bellalta", "Boris", "" ], [ "Oechsner", "Simon", "" ], [ "Liao", "Ruizhi", "" ], [ "Oliver", "Miquel", "" ] ]
The dramatic advances in wireless communications and electronics have enabled the development of Wireless Sensor Networks (WSNs). WSNs consist of many affordable and portable sensor nodes for collecting data from the environment. In this article, we address management requirements of WSNs through presenting some key management scenarios in the Smart Cities context, such as intelligent transportation systems, smart grids and smart buildings. The limited resources and heterogeneous characteristics of WSNs pose new challenges in network management, which include the presence of various faults, the difficulty in replacing and repairing a large number of sensor nodes, the existence of an uncertain topology, and the resource allocation. To cope with these challenges, we first discuss advantages and disadvantages of centralized and distributed management approaches and then discuss the benefit of the multilevel management schema. Next, we present in detail the specific features for a WSN management system such as lightweight, self-detection, self-configuration, sharing infrastructure, service monitoring, plug and play, context awareness and interoperability. Finally, we present the required mechanisms for some basic management functions.
2402.11958
Anqi Li
Anqi Li, Yu Lu, Nirui Song, Shuai Zhang, Lizhi Ma, Zhenzhong Lan
Automatic Evaluation for Mental Health Counseling using LLMs
21 pages, 4 figures
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-quality psychological counseling is crucial for mental health worldwide, and timely evaluation is vital for ensuring its effectiveness. However, obtaining professional evaluation for each counseling session is expensive and challenging. Existing methods that rely on self- or third-party manual reports to assess the quality of counseling suffer from subjective bias and are time-consuming. To address the above challenges, this paper proposes an innovative and efficient automatic approach using large language models (LLMs) to evaluate the working alliance in counseling conversations. We collected a comprehensive counseling dataset and conducted multiple third-party evaluations based on therapeutic relationship theory. Our LLM-based evaluation, combined with our guidelines, shows high agreement with human evaluations and provides valuable insights into counseling scripts. This highlights the potential of LLMs as supervisory tools for psychotherapists. By integrating LLMs into the evaluation process, our approach offers a cost-effective and dependable means of assessing counseling quality, enhancing overall effectiveness.
[ { "created": "Mon, 19 Feb 2024 09:00:10 GMT", "version": "v1" } ]
2024-02-20
[ [ "Li", "Anqi", "" ], [ "Lu", "Yu", "" ], [ "Song", "Nirui", "" ], [ "Zhang", "Shuai", "" ], [ "Ma", "Lizhi", "" ], [ "Lan", "Zhenzhong", "" ] ]
High-quality psychological counseling is crucial for mental health worldwide, and timely evaluation is vital for ensuring its effectiveness. However, obtaining professional evaluation for each counseling session is expensive and challenging. Existing methods that rely on self- or third-party manual reports to assess the quality of counseling suffer from subjective bias and are time-consuming. To address the above challenges, this paper proposes an innovative and efficient automatic approach using large language models (LLMs) to evaluate the working alliance in counseling conversations. We collected a comprehensive counseling dataset and conducted multiple third-party evaluations based on therapeutic relationship theory. Our LLM-based evaluation, combined with our guidelines, shows high agreement with human evaluations and provides valuable insights into counseling scripts. This highlights the potential of LLMs as supervisory tools for psychotherapists. By integrating LLMs into the evaluation process, our approach offers a cost-effective and dependable means of assessing counseling quality, enhancing overall effectiveness.
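As a rough illustration of prompt-based scoring of a counseling transcript, the sketch below defines a rubric prompt and a placeholder `call_llm` hook. The rubric wording, rating scale, and function names are hypothetical and not taken from the paper's guidelines.

```python
# Illustrative-only sketch of rating a working-alliance dimension with an LLM.
# `call_llm` is a placeholder for whatever model/API is actually used.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

RUBRIC = (
    "You are rating the working alliance in a counseling conversation.\n"
    "Return a single integer from 1 (poor) to 5 (excellent) for the bond "
    "between counselor and client, followed by one sentence of justification."
)

def rate_session(transcript: str) -> str:
    # Build the evaluation prompt from the rubric and the session transcript.
    prompt = f"{RUBRIC}\n\nTranscript:\n{transcript}\n\nRating:"
    return call_llm(prompt)
```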
2103.05187
Mingjie Sun
Mingjie Sun, Jimin Xiao, Eng Gee Lim
Iterative Shrinking for Referring Expression Grounding Using Deep Reinforcement Learning
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we are tackling the proposal-free referring expression grounding task, aiming at localizing the target object according to a query sentence, without relying on off-the-shelf object proposals. Existing proposal-free methods employ a query-image matching branch to select the highest-score point in the image feature map as the target box center, with its width and height predicted by another branch. Such methods, however, fail to utilize the contextual relation between the target and reference objects, and lack interpretability in their reasoning procedure. To solve these problems, we propose an iterative shrinking mechanism to localize the target, where the shrinking direction is decided by a reinforcement learning agent, with all contents within the current image patch comprehensively considered. Besides, the sequential shrinking process makes it possible to demonstrate the reasoning about how the target is iteratively found. Experiments show that the proposed method boosts the accuracy by 4.32% against the previous state-of-the-art (SOTA) method on the RefCOCOg dataset, where query sentences are long and complex, with many targets referred to by other reference objects.
[ { "created": "Tue, 9 Mar 2021 02:36:45 GMT", "version": "v1" } ]
2021-03-10
[ [ "Sun", "Mingjie", "" ], [ "Xiao", "Jimin", "" ], [ "Lim", "Eng Gee", "" ] ]
In this paper, we are tackling the proposal-free referring expression grounding task, aiming at localizing the target object according to a query sentence, without relying on off-the-shelf object proposals. Existing proposal-free methods employ a query-image matching branch to select the highest-score point in the image feature map as the target box center, with its width and height predicted by another branch. Such methods, however, fail to utilize the contextual relation between the target and reference objects, and lack interpretability in their reasoning procedure. To solve these problems, we propose an iterative shrinking mechanism to localize the target, where the shrinking direction is decided by a reinforcement learning agent, with all contents within the current image patch comprehensively considered. Besides, the sequential shrinking process makes it possible to demonstrate the reasoning about how the target is iteratively found. Experiments show that the proposed method boosts the accuracy by 4.32% against the previous state-of-the-art (SOTA) method on the RefCOCOg dataset, where query sentences are long and complex, with many targets referred to by other reference objects.
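The iterative shrinking procedure can be pictured as a loop that repeatedly trims one side of a bounding box until the agent stops. In the sketch below a random policy stands in for the paper's reinforcement-learning agent, so the logic of the loop is illustrative only.

```python
# Minimal sketch of iterative shrinking: start from the full image and trim
# one side per step, with a stand-in random policy choosing the direction.
import random

def shrink(box, direction, step=16):
    x1, y1, x2, y2 = box
    if direction == "left":   x1 += step
    if direction == "right":  x2 -= step
    if direction == "top":    y1 += step
    if direction == "bottom": y2 -= step
    return (x1, y1, x2, y2)

def localize(image_size=(640, 480), steps=10):
    w, h = image_size
    box = (0, 0, w, h)
    for _ in range(steps):
        # In the paper the action depends on the query sentence and the
        # contents of the current patch; here a side is picked at random.
        action = random.choice(["left", "right", "top", "bottom", "stop"])
        if action == "stop":
            break
        box = shrink(box, action)
    return box

print(localize())
```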
2105.03811
Farzaneh Rajabi
Farzaneh Rajabi, Jack Siyuan He
Click-Through Rate Prediction Using Graph Neural Networks and Online Learning
null
null
null
null
cs.IR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Recommendation systems have been extensively studied in the literature and are ubiquitous in online advertisement, the shopping industry/e-commerce, query suggestions in search engines, and friend recommendation in social networks. Moreover, restaurant/music/product/movie/news/app recommendations are only a few of the applications of a recommender system. A small percentage improvement in CTR prediction accuracy has been reported to add millions of dollars of revenue to the advertisement industry. Click-Through-Rate (CTR) prediction is a special version of a recommender system in which the goal is predicting whether or not a user is going to click on a recommended item. A content-based recommendation approach takes into account the past history of the user's behavior, i.e. the recommended products and the user's reactions to them. So, a personalized model that recommends the right item to the right user at the right time is the key to building such a model. On the other hand, the so-called collaborative filtering approach incorporates the click history of the users who are very similar to a particular user, thereby helping the recommender to come up with a more confident prediction for that particular user by leveraging the wider knowledge of users who share their taste in a connected network of users. In this project, we are interested in building a CTR predictor using Graph Neural Networks complemented by an online learning algorithm that models such dynamic interactions. By framing the problem as a binary classification task, we have evaluated this system both on the offline models (GNN, Deep Factorization Machines) with a test AUC of 0.7417 and on the online learning model with a test AUC of 0.7585, using a sub-sampled version of the Criteo public dataset consisting of 10,000 data points.
[ { "created": "Sun, 9 May 2021 01:35:49 GMT", "version": "v1" } ]
2021-05-11
[ [ "Rajabi", "Farzaneh", "" ], [ "He", "Jack Siyuan", "" ] ]
Recommendation systems have been extensively studied in the literature and are ubiquitous in online advertisement, the shopping industry/e-commerce, query suggestions in search engines, and friend recommendation in social networks. Moreover, restaurant/music/product/movie/news/app recommendations are only a few of the applications of a recommender system. A small percentage improvement in CTR prediction accuracy has been reported to add millions of dollars of revenue to the advertisement industry. Click-Through-Rate (CTR) prediction is a special version of a recommender system in which the goal is predicting whether or not a user is going to click on a recommended item. A content-based recommendation approach takes into account the past history of the user's behavior, i.e. the recommended products and the user's reactions to them. So, a personalized model that recommends the right item to the right user at the right time is the key to building such a model. On the other hand, the so-called collaborative filtering approach incorporates the click history of the users who are very similar to a particular user, thereby helping the recommender to come up with a more confident prediction for that particular user by leveraging the wider knowledge of users who share their taste in a connected network of users. In this project, we are interested in building a CTR predictor using Graph Neural Networks complemented by an online learning algorithm that models such dynamic interactions. By framing the problem as a binary classification task, we have evaluated this system both on the offline models (GNN, Deep Factorization Machines) with a test AUC of 0.7417 and on the online learning model with a test AUC of 0.7585, using a sub-sampled version of the Criteo public dataset consisting of 10,000 data points.
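A hedged sketch of the online-learning side of such a CTR pipeline: a hashed-feature logistic regression updated one impression at a time. This is a generic baseline for Criteo-style categorical data, not the GNN or Deep Factorization Machine models evaluated in the project.

```python
# Online logistic regression with hashed categorical features (SGD on log loss).
import numpy as np

D = 2 ** 18                      # size of the hashed feature space
w = np.zeros(D)

def features(raw: dict) -> list:
    # Hash each "field=value" pair into an index of the weight vector.
    return [hash(f"{k}={v}") % D for k, v in raw.items()]

def predict(idx: list) -> float:
    return 1.0 / (1.0 + np.exp(-np.clip(w[idx].sum(), -30, 30)))

def update(idx: list, label: int, lr: float = 0.05) -> None:
    grad = predict(idx) - label  # gradient of the log loss w.r.t. the logit
    w[idx] -= lr * grad

# One simulated impression: categorical fields -> hashed indices -> update.
event = {"site": "news", "device": "mobile", "ad_id": "1234"}
idx = features(event)
update(idx, label=1)
print(predict(idx))
```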
2204.04392
Ningyu Zhang
Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhenru Zhang, Chuanqi Tan, Huajun Chen
Contrastive Demonstration Tuning for Pre-trained Language Models
Accepted to EMNLP 2022(Findings)
null
null
null
cs.CL cs.AI cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios. Recent works have focused on automatically searching discrete or continuous prompts or optimized verbalizers, yet studies for the demonstration are still limited. Concretely, the demonstration examples are crucial for an excellent final performance of prompt-tuning. In this paper, we propose a novel pluggable, extensible, and efficient approach named contrastive demonstration tuning, which is free of demonstration sampling. Furthermore, the proposed approach can be: (i) Plugged into any previous prompt-tuning approaches; (ii) Extended to widespread classification tasks with a large number of categories. Experimental results on 16 datasets illustrate that our method integrated with previous approaches LM-BFF and P-tuning can yield better performance. Code is available in https://github.com/zjunlp/PromptKG/tree/main/research/Demo-Tuning.
[ { "created": "Sat, 9 Apr 2022 05:30:48 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2022 14:42:14 GMT", "version": "v2" }, { "created": "Wed, 19 Oct 2022 15:54:59 GMT", "version": "v3" }, { "created": "Tue, 19 Sep 2023 12:27:36 GMT", "version": "v4" } ]
2023-09-20
[ [ "Liang", "Xiaozhuan", "" ], [ "Zhang", "Ningyu", "" ], [ "Cheng", "Siyuan", "" ], [ "Zhang", "Zhenru", "" ], [ "Tan", "Chuanqi", "" ], [ "Chen", "Huajun", "" ] ]
Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios. Recent works have focused on automatically searching discrete or continuous prompts or optimized verbalizers, yet studies for the demonstration are still limited. Concretely, the demonstration examples are crucial for an excellent final performance of prompt-tuning. In this paper, we propose a novel pluggable, extensible, and efficient approach named contrastive demonstration tuning, which is free of demonstration sampling. Furthermore, the proposed approach can be: (i) Plugged into any previous prompt-tuning approaches; (ii) Extended to widespread classification tasks with a large number of categories. Experimental results on 16 datasets illustrate that our method integrated with previous approaches LM-BFF and P-tuning can yield better performance. Code is available in https://github.com/zjunlp/PromptKG/tree/main/research/Demo-Tuning.
2012.13823
Raphael Memmesheimer
Raphael Memmesheimer, Simon H\"aring, Nick Theisen, Dietrich Paulus
Skeleton-DML: Deep Metric Learning for Skeleton-Based One-Shot Action Recognition
8 pages, 8 figures, 4 tables
null
null
null
cs.CV cs.AI cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
One-shot action recognition allows the recognition of human-performed actions with only a single training example. This can influence human-robot interaction positively by enabling the robot to react to previously unseen behaviour. We formulate the one-shot action recognition problem as a deep metric learning problem and propose a novel image-based skeleton representation that performs well in a metric learning setting. Therefore, we train a model that projects the image representations into an embedding space. In the embedding space, similar actions have a low Euclidean distance while dissimilar actions have a higher distance. The one-shot action recognition problem then becomes a nearest-neighbor search in a set of activity reference samples. We evaluate the performance of our proposed representation against a variety of other skeleton-based image representations. In addition, we present an ablation study that shows the influence of different embedding vector sizes, losses and augmentation. Our approach lifts the state-of-the-art by 3.3% for the one-shot action recognition protocol on the NTU RGB+D 120 dataset under a comparable training setup. With additional augmentation, our result improves by over 7.7%.
[ { "created": "Sat, 26 Dec 2020 22:31:11 GMT", "version": "v1" }, { "created": "Mon, 8 Mar 2021 14:33:17 GMT", "version": "v2" } ]
2021-03-09
[ [ "Memmesheimer", "Raphael", "" ], [ "Häring", "Simon", "" ], [ "Theisen", "Nick", "" ], [ "Paulus", "Dietrich", "" ] ]
One-shot action recognition allows the recognition of human-performed actions with only a single training example. This can influence human-robot interaction positively by enabling the robot to react to previously unseen behaviour. We formulate the one-shot action recognition problem as a deep metric learning problem and propose a novel image-based skeleton representation that performs well in a metric learning setting. Therefore, we train a model that projects the image representations into an embedding space. In the embedding space, similar actions have a low Euclidean distance while dissimilar actions have a higher distance. The one-shot action recognition problem then becomes a nearest-neighbor search in a set of activity reference samples. We evaluate the performance of our proposed representation against a variety of other skeleton-based image representations. In addition, we present an ablation study that shows the influence of different embedding vector sizes, losses and augmentation. Our approach lifts the state-of-the-art by 3.3% for the one-shot action recognition protocol on the NTU RGB+D 120 dataset under a comparable training setup. With additional augmentation, our result improves by over 7.7%.
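Once embeddings exist, the one-shot recognition step reduces to a nearest-neighbor search over the activity reference samples. The sketch below uses random vectors in place of the learned skeleton-image encoder, so only the search step is faithful to the abstract.

```python
# Nearest-neighbor one-shot classification in an embedding space.
import numpy as np

rng = np.random.default_rng(0)
ref_emb = rng.normal(size=(20, 128))     # one reference embedding per action
ref_labels = np.arange(20)
test_emb = rng.normal(size=(5, 128))     # embeddings of unseen clips

# Euclidean distances between every test sample and every reference sample.
dists = np.linalg.norm(test_emb[:, None, :] - ref_emb[None, :, :], axis=-1)
pred = ref_labels[dists.argmin(axis=1)]  # label of the nearest reference
print(pred)
```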
1903.01240
Aran Sena Mr.
Aran Sena, Brendan Michael, Matthew Howard
Improving Task-Parameterised Movement Learning Generalisation with Frame-Weighted Trajectory Generation
8 pages, 6 figures, submitted to 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning from Demonstration depends on a robot learner generalising its learned model to unseen conditions, as it is not feasible for a person to provide a demonstration set that accounts for all possible variations in non-trivial tasks. While there are many learning methods that can handle interpolation of observed data effectively, extrapolation from observed data offers a much greater challenge. To address this problem of generalisation, this paper proposes a modified Task-Parameterised Gaussian Mixture Regression method that considers the relevance of task parameters during trajectory generation, as determined by variance in the data. The benefits of the proposed method are first explored using a simulated reaching task data set. Here it is shown that the proposed method offers far-reaching, low-error extrapolation abilities that are different in nature to existing learning methods. Data collected from novice users for a real-world manipulation task is then considered, where it is shown that the proposed method is able to effectively reduce grasping performance errors by ${\sim30\%}$ and extrapolate to unseen grasp targets under real-world conditions. These results indicate the proposed method serves to benefit novice users by placing less reliance on the user to provide high quality demonstration data sets.
[ { "created": "Mon, 4 Mar 2019 13:50:49 GMT", "version": "v1" } ]
2019-03-05
[ [ "Sena", "Aran", "" ], [ "Michael", "Brendan", "" ], [ "Howard", "Matthew", "" ] ]
Learning from Demonstration depends on a robot learner generalising its learned model to unseen conditions, as it is not feasible for a person to provide a demonstration set that accounts for all possible variations in non-trivial tasks. While there are many learning methods that can handle interpolation of observed data effectively, extrapolation from observed data offers a much greater challenge. To address this problem of generalisation, this paper proposes a modified Task-Parameterised Gaussian Mixture Regression method that considers the relevance of task parameters during trajectory generation, as determined by variance in the data. The benefits of the proposed method are first explored using a simulated reaching task data set. Here it is shown that the proposed method offers far-reaching, low-error extrapolation abilities that are different in nature to existing learning methods. Data collected from novice users for a real-world manipulation task is then considered, where it is shown that the proposed method is able to effectively reduce grasping performance errors by ${\sim30\%}$ and extrapolate to unseen grasp targets under real-world conditions. These results indicate the proposed method serves to benefit novice users by placing less reliance on the user to provide high quality demonstration data sets.
2405.18609
Otman Benchekroun
Otman Benchekroun, Kaixiang Xie, Hsueh-Ti Derek Liu, Eitan Grinspun, Sheldon Andrews, Victor Zordan
Actuators \`A La Mode: Modal Actuations for Soft Body Locomotion
15 pages, 14 figures
null
null
null
cs.GR
http://creativecommons.org/licenses/by/4.0/
Traditional character animation specializes in characters with a rigidly articulated skeleton and a bipedal/quadrupedal morphology. This assumption simplifies many aspects of designing physically based animations, like locomotion, but comes at the price of excluding characters with arbitrary deformable geometries. To remedy this, our framework makes use of a spatio-temporal actuation subspace built from the natural vibration modes of the character geometry. The resulting actuation is coupled to a reduced fast soft-body simulation, allowing us to formulate a locomotion optimization problem that is tractable for a wide variety of high-resolution deformable characters.
[ { "created": "Tue, 28 May 2024 21:39:29 GMT", "version": "v1" } ]
2024-05-30
[ [ "Benchekroun", "Otman", "" ], [ "Xie", "Kaixiang", "" ], [ "Liu", "Hsueh-Ti Derek", "" ], [ "Grinspun", "Eitan", "" ], [ "Andrews", "Sheldon", "" ], [ "Zordan", "Victor", "" ] ]
Traditional character animation specializes in characters with a rigidly articulated skeleton and a bipedal/quadrupedal morphology. This assumption simplifies many aspects of designing physically based animations, like locomotion, but comes at the price of excluding characters with arbitrary deformable geometries. To remedy this, our framework makes use of a spatio-temporal actuation subspace built from the natural vibration modes of the character geometry. The resulting actuation is coupled to a reduced fast soft-body simulation, allowing us to formulate a locomotion optimization problem that is tractable for a wide variety of high-resolution deformable characters.
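Natural vibration modes of this kind are usually obtained from the generalized eigenvalue problem K v = lambda M v on the stiffness and mass matrices. The sketch below does this for a toy mass-spring chain; it illustrates the standard computation and is not the paper's implementation.

```python
# Extract vibration modes (a candidate actuation subspace basis) from a toy
# stiffness matrix K and mass matrix M via the generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

n = 10
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D chain stiffness
M = np.eye(n)                                          # lumped unit masses

eigvals, modes = eigh(K, M)   # columns of `modes` solve K v = lambda M v
basis = modes[:, :4]          # keep the lowest-frequency modes as a subspace
print(np.sqrt(np.maximum(eigvals[:4], 0)))  # corresponding natural frequencies
```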
2303.15847
Hiroki Nakano
Hiroki Nakano, Daiki Chiba, Takashi Koide, Naoki Fukushi, Takeshi Yagi, Takeo Hariu, Katsunari Yoshioka, Tsutomu Matsumoto
Canary in Twitter Mine: Collecting Phishing Reports from Experts and Non-experts
Accepted at the 18th International Conference on Availability, Reliability and Security (ARES 2023)
null
null
null
cs.CR cs.SI
http://creativecommons.org/licenses/by/4.0/
The rise in phishing attacks via e-mail and short message service (SMS) has not slowed down at all. The first thing we need to do to combat the ever-increasing number of phishing attacks is to collect and characterize more phishing cases that reach end users. Without understanding these characteristics, anti-phishing countermeasures cannot evolve. In this study, we propose an approach using Twitter as a new observation point to immediately collect and characterize phishing cases via e-mail and SMS that evade countermeasures and reach users. Specifically, we propose CrowdCanary, a system capable of structurally and accurately extracting phishing information (e.g., URLs and domains) from tweets about phishing by users who have actually discovered or encountered it. In our three months of live operation, CrowdCanary identified 35,432 phishing URLs out of 38,935 phishing reports; 31,960 (90.2%) of these phishing URLs were later detected by the anti-virus engine. We analyzed users who shared phishing threats by categorizing them into two groups: experts and non-experts. As a result, we discovered that CrowdCanary extracts information specific to non-expert reports, such as company brand names in tweets, phishing attack details from tweet images, and pre-redirect landing page information.
[ { "created": "Tue, 28 Mar 2023 09:38:37 GMT", "version": "v1" }, { "created": "Tue, 6 Jun 2023 05:30:12 GMT", "version": "v2" } ]
2023-06-07
[ [ "Nakano", "Hiroki", "" ], [ "Chiba", "Daiki", "" ], [ "Koide", "Takashi", "" ], [ "Fukushi", "Naoki", "" ], [ "Yagi", "Takeshi", "" ], [ "Hariu", "Takeo", "" ], [ "Yoshioka", "Katsunari", "" ], [ "Matsumoto", "Tsutomu", "" ] ]
The rise in phishing attacks via e-mail and short message service (SMS) has not slowed down at all. The first thing we need to do to combat the ever-increasing number of phishing attacks is to collect and characterize more phishing cases that reach end users. Without understanding these characteristics, anti-phishing countermeasures cannot evolve. In this study, we propose an approach using Twitter as a new observation point to immediately collect and characterize phishing cases via e-mail and SMS that evade countermeasures and reach users. Specifically, we propose CrowdCanary, a system capable of structurally and accurately extracting phishing information (e.g., URLs and domains) from tweets about phishing by users who have actually discovered or encountered it. In our three months of live operation, CrowdCanary identified 35,432 phishing URLs out of 38,935 phishing reports; 31,960 (90.2%) of these phishing URLs were later detected by the anti-virus engine. We analyzed users who shared phishing threats by categorizing them into two groups: experts and non-experts. As a result, we discovered that CrowdCanary extracts information specific to non-expert reports, such as company brand names in tweets, phishing attack details from tweet images, and pre-redirect landing page information.
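The URL and domain extraction step can be sketched with simple de-fanging and regular expressions, since phishing reporters often obfuscate links (hxxp, [.]). This toy version is only an illustration of the idea, not CrowdCanary's actual pipeline.

```python
# Toy extraction of candidate phishing URLs and domains from a report tweet.
import re

def extract_urls(tweet: str) -> list:
    # Undo common de-fanging conventions before matching URLs.
    refanged = tweet.replace("hxxp", "http").replace("[.]", ".").replace("(.)", ".")
    return re.findall(r"https?://[^\s\"'<>]+", refanged)

def domain(url: str) -> str:
    return re.sub(r"^https?://", "", url).split("/")[0].lower()

tweet = "Phishing alert! Fake login page at hxxps://secure-login[.]example[.]com/verify"
urls = extract_urls(tweet)
print(urls, [domain(u) for u in urls])
```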
2406.15765
Zhongzhi Yu
Zhongzhi Yu, Zheng Wang, Yonggan Fu, Huihong Shi, Khalid Shaikh, Yingyan Celine Lin
Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration
null
null
null
null
cs.LG cs.CL
http://creativecommons.org/licenses/by/4.0/
Attention is a fundamental component behind the remarkable achievements of large language models (LLMs). However, our current understanding of the attention mechanism, especially regarding how attention distributions are established, remains limited. Inspired by recent studies that explore the presence of an attention sink at the initial token, which receives disproportionately large attention scores despite its lack of semantic importance, this work delves deeper into this phenomenon. We aim to provide a more profound understanding of the existence of attention sinks within LLMs and to uncover ways to enhance the achievable accuracy of LLMs by directly optimizing the attention distributions, without the need for weight finetuning. Specifically, this work begins with comprehensive visualizations of the attention distributions in LLMs during inference across various inputs and tasks. Based on these visualizations, to the best of our knowledge, we are the first to discover that (1) attention sinks occur not only at the start of sequences but also within later tokens of the input, and (2) not all attention sinks have a positive impact on the achievable accuracy of LLMs. Building upon our findings, we propose a training-free Attention Calibration Technique (ACT) that automatically optimizes the attention distributions on the fly during inference in an input-adaptive manner. Extensive experiments validate that ACT consistently enhances the accuracy of various LLMs across different applications. Specifically, ACT achieves an average improvement of up to 7.30% in accuracy across different datasets when applied to Llama-30B. Our code is available at https://github.com/GATECH-EIC/ACT.
[ { "created": "Sat, 22 Jun 2024 07:00:43 GMT", "version": "v1" } ]
2024-06-25
[ [ "Yu", "Zhongzhi", "" ], [ "Wang", "Zheng", "" ], [ "Fu", "Yonggan", "" ], [ "Shi", "Huihong", "" ], [ "Shaikh", "Khalid", "" ], [ "Lin", "Yingyan Celine", "" ] ]
Attention is a fundamental component behind the remarkable achievements of large language models (LLMs). However, our current understanding of the attention mechanism, especially regarding how attention distributions are established, remains limited. Inspired by recent studies that explore the presence of an attention sink at the initial token, which receives disproportionately large attention scores despite its lack of semantic importance, this work delves deeper into this phenomenon. We aim to provide a more profound understanding of the existence of attention sinks within LLMs and to uncover ways to enhance the achievable accuracy of LLMs by directly optimizing the attention distributions, without the need for weight finetuning. Specifically, this work begins with comprehensive visualizations of the attention distributions in LLMs during inference across various inputs and tasks. Based on these visualizations, to the best of our knowledge, we are the first to discover that (1) attention sinks occur not only at the start of sequences but also within later tokens of the input, and (2) not all attention sinks have a positive impact on the achievable accuracy of LLMs. Building upon our findings, we propose a training-free Attention Calibration Technique (ACT) that automatically optimizes the attention distributions on the fly during inference in an input-adaptive manner. Extensive experiments validate that ACT consistently enhances the accuracy of various LLMs across different applications. Specifically, ACT achieves an average improvement of up to 7.30% in accuracy across different datasets when applied to Llama-30B. Our code is available at https://github.com/GATECH-EIC/ACT.
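The attention-sink measurement can be reproduced in miniature by computing causal softmax attention and checking how much probability mass each query assigns to the first token. Random queries and keys stand in for a real transformer layer, so the sketch shows the measurement, not the sink effect itself.

```python
# Measure the average attention mass placed on token 0 (a candidate sink).
import torch

torch.manual_seed(0)
seq_len, d = 32, 64
q = torch.randn(seq_len, d)
k = torch.randn(seq_len, d)

scores = q @ k.T / d ** 0.5
# Causal mask: each position may only attend to itself and earlier tokens.
mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
attn = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1)

print(attn[:, 0].mean().item())  # mean attention every query gives to token 0
```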
1806.08211
Pranjul Yadav
Marcelo Tallis, Pranjul Yadav
Reacting to Variations in Product Demand: An Application for Conversion Rate (CR) Prediction in Sponsored Search
null
null
null
null
cs.IR cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In online internet advertising, machine learning models are widely used to compute the likelihood of a user engaging with product-related advertisements. However, the performance of traditional machine learning models is often impacted by variations in user and advertiser behavior. For example, search engine traffic for florists usually tends to peak around Valentine's Day, Mother's Day, etc. To overcome this challenge, in this manuscript we propose three models which are able to incorporate the effects arising from variations in product demand. The proposed models are a combination of product demand features, specialized data sampling methodologies and ensemble techniques. We demonstrate the performance of our proposed models on datasets obtained from a real-world setting. Our results show that the proposed models more accurately predict the outcome of users' interactions with product-related advertisements while simultaneously being robust to fluctuations in user and advertiser behavior.
[ { "created": "Fri, 25 May 2018 23:15:00 GMT", "version": "v1" } ]
2018-06-22
[ [ "Tallis", "Marcelo", "" ], [ "Yadav", "Pranjul", "" ] ]
In online internet advertising, machine learning models are widely used to compute the likelihood of a user engaging with product-related advertisements. However, the performance of traditional machine learning models is often impacted by variations in user and advertiser behavior. For example, search engine traffic for florists usually tends to peak around Valentine's Day, Mother's Day, etc. To overcome this challenge, in this manuscript we propose three models which are able to incorporate the effects arising from variations in product demand. The proposed models are a combination of product demand features, specialized data sampling methodologies and ensemble techniques. We demonstrate the performance of our proposed models on datasets obtained from a real-world setting. Our results show that the proposed models more accurately predict the outcome of users' interactions with product-related advertisements while simultaneously being robust to fluctuations in user and advertiser behavior.
1910.00697
Marthe Bonamy
Marthe Bonamy, Micha{\l} Pilipczuk
Graphs of bounded cliquewidth are polynomially $\chi$-bounded
20 pages
null
null
null
cs.DM math.CO
http://creativecommons.org/licenses/by/4.0/
We prove that if $\mathcal{C}$ is a hereditary class of graphs that is polynomially $\chi$-bounded, then the class of graphs that admit decompositions into pieces belonging to $\mathcal{C}$ along cuts of bounded rank is also polynomially $\chi$-bounded. In particular, this implies that for every positive integer $k$, the class of graphs of cliquewidth at most $k$ is polynomially $\chi$-bounded.
[ { "created": "Tue, 1 Oct 2019 22:15:31 GMT", "version": "v1" }, { "created": "Mon, 30 Dec 2019 12:34:44 GMT", "version": "v2" }, { "created": "Tue, 7 Jul 2020 15:49:48 GMT", "version": "v3" } ]
2020-07-08
[ [ "Bonamy", "Marthe", "" ], [ "Pilipczuk", "Michał", "" ] ]
We prove that if $\mathcal{C}$ is a hereditary class of graphs that is polynomially $\chi$-bounded, then the class of graphs that admit decompositions into pieces belonging to $\mathcal{C}$ along cuts of bounded rank is also polynomially $\chi$-bounded. In particular, this implies that for every positive integer $k$, the class of graphs of cliquewidth at most $k$ is polynomially $\chi$-bounded.
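For reference, the standard definition of polynomial chi-boundedness used in the abstract can be written as follows, where chi and omega denote the chromatic and clique numbers.

```latex
% A hereditary class $\mathcal{C}$ is polynomially $\chi$-bounded if there is a
% polynomial $f$ such that
\chi(G) \;\le\; f\bigl(\omega(G)\bigr) \quad \text{for every } G \in \mathcal{C},
% where $\chi(G)$ is the chromatic number and $\omega(G)$ the clique number of $G$.
```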
1804.08416
Zhaowei Zhu
Zhaowei Zhu, Ting Liu, Shengda Jin, and Xiliang Luo
Learn and Pick Right Nodes to Offload
8 pages, 4 figures
null
null
null
cs.NI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Task offloading is a promising technology to exploit the benefits of fog computing. An effective task offloading strategy is needed to utilize the computational resources efficiently. In this paper, we endeavor to seek an online task offloading strategy to minimize the long-term latency. In particular, we formulate a stochastic programming problem, where the expectations of the system parameters change abruptly at unknown time instants. Meanwhile, we consider the fact that the queried nodes can only feed back the processing results after finishing the tasks. We then put forward an effective algorithm to solve this challenging stochastic programming under the non-stationary bandit model. We further prove that our proposed algorithm is asymptotically optimal in a non-stationary fog-enabled network. Numerical simulations are carried out to corroborate our designs.
[ { "created": "Fri, 20 Apr 2018 05:18:09 GMT", "version": "v1" }, { "created": "Tue, 24 Apr 2018 11:49:29 GMT", "version": "v2" } ]
2018-04-25
[ [ "Zhu", "Zhaowei", "" ], [ "Liu", "Ting", "" ], [ "Jin", "Shengda", "" ], [ "Luo", "Xiliang", "" ] ]
Task offloading is a promising technology to exploit the benefits of fog computing. An effective task offloading strategy is needed to utilize the computational resources efficiently. In this paper, we endeavor to seek an online task offloading strategy to minimize the long-term latency. In particular, we formulate a stochastic programming problem, where the expectations of the system parameters change abruptly at unknown time instants. Meanwhile, we consider the fact that the queried nodes can only feed back the processing results after finishing the tasks. We then put forward an effective algorithm to solve this challenging stochastic programming under the non-stationary bandit model. We further prove that our proposed algorithm is asymptotically optimal in a non-stationary fog-enabled network. Numerical simulations are carried out to corroborate our designs.
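A sliding-window UCB rule is one common way to act under abruptly changing (non-stationary) reward distributions. The sketch below applies it to fog-node selection with rewards defined as negative latency; it is a generic bandit baseline, not the paper's algorithm.

```python
# Sliding-window UCB over candidate fog nodes (reward = negative latency).
import math, random
from collections import deque

class SlidingWindowUCB:
    def __init__(self, n_nodes: int, window: int = 200):
        self.history = [deque(maxlen=window) for _ in range(n_nodes)]
        self.t = 0

    def select(self) -> int:
        self.t += 1
        scores = []
        for h in self.history:
            if not h:
                return len(scores)            # try each node at least once
            mean = sum(h) / len(h)
            bonus = math.sqrt(2 * math.log(self.t) / len(h))
            scores.append(mean + bonus)
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, node: int, reward: float) -> None:
        self.history[node].append(reward)

bandit = SlidingWindowUCB(n_nodes=3)
for _ in range(50):
    node = bandit.select()
    bandit.update(node, reward=-random.uniform(0.1, 1.0))  # observed latency
```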
2001.07558
Tiphaine Viard
Tiphaine Viard, Thomas McLachlan, Hamidreza Ghader, Satoshi Sekine
Classifying Wikipedia in a fine-grained hierarchy: what graphs can contribute
7 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wikipedia is a huge opportunity for machine learning, being the largest semi-structured base of knowledge available. Because of this, many works examine its contents, and focus on structuring it in order to make it usable in learning tasks, for example by classifying it into an ontology. Beyond its textual contents, Wikipedia also displays a typical graph structure, where pages are linked together through citations. In this paper, we address the task of integrating graph (i.e. structure) information to classify Wikipedia into a fine-grained named entity ontology (NE), the Extended Named Entity hierarchy. To address this task, we first start by assessing the relevance of the graph structure for NE classification. We then explore two directions, one related to feature vectors using graph descriptors commonly used in large-scale network analysis, and one extending flat classification to a weighted model taking into account semantic similarity. We conduct at-scale practical experiments, on a manually labeled subset of 22,000 pages extracted from the Japanese Wikipedia. Our results show that integrating graph information succeeds at reducing sparsity of the input feature space, and yields classification results that are comparable or better than previous works.
[ { "created": "Tue, 21 Jan 2020 14:19:49 GMT", "version": "v1" }, { "created": "Wed, 22 Jan 2020 08:24:59 GMT", "version": "v2" } ]
2020-01-23
[ [ "Viard", "Tiphaine", "" ], [ "McLachlan", "Thomas", "" ], [ "Ghader", "Hamidreza", "" ], [ "Sekine", "Satoshi", "" ] ]
Wikipedia is a huge opportunity for machine learning, being the largest semi-structured base of knowledge available. Because of this, many works examine its contents, and focus on structuring it in order to make it usable in learning tasks, for example by classifying it into an ontology. Beyond its textual contents, Wikipedia also displays a typical graph structure, where pages are linked together through citations. In this paper, we address the task of integrating graph (i.e. structure) information to classify Wikipedia into a fine-grained named entity ontology (NE), the Extended Named Entity hierarchy. To address this task, we first start by assessing the relevance of the graph structure for NE classification. We then explore two directions, one related to feature vectors using graph descriptors commonly used in large-scale network analysis, and one extending flat classification to a weighted model taking into account semantic similarity. We conduct at-scale practical experiments, on a manually labeled subset of 22,000 pages extracted from the Japanese Wikipedia. Our results show that integrating graph information succeeds at reducing sparsity of the input feature space, and yields classification results that are comparable or better than previous works.
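Graph descriptors of the kind mentioned (degree, PageRank, clustering coefficient) can be computed directly with networkx; the tiny example graph below stands in for the Wikipedia page-link graph, and the resulting rows would be concatenated with the textual features before classification.

```python
# Turn a link graph into per-node descriptor vectors for classification.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()      # placeholder for the page-link graph
pagerank = nx.pagerank(G)
clustering = nx.clustering(G)

# One row per page: [degree, PageRank, clustering coefficient].
features = np.array(
    [[G.degree(n), pagerank[n], clustering[n]] for n in G.nodes()]
)
print(features.shape)
```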
2108.12427
Jessica Whittlestone
Jess Whittlestone, Jack Clark
Why and How Governments Should Monitor AI Development
null
null
null
null
cs.CY cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper we outline a proposal for improving the governance of artificial intelligence (AI) by investing in government capacity to systematically measure and monitor the capabilities and impacts of AI systems. If adopted, this would give governments greater information about the AI ecosystem, equipping them to more effectively direct AI development and deployment in the most societally and economically beneficial directions. It would also create infrastructure that could rapidly identify potential threats or harms that could occur as a consequence of changes in the AI ecosystem, such as the emergence of strategically transformative capabilities, or the deployment of harmful systems. We begin by outlining the problem which motivates this proposal: in brief, traditional governance approaches struggle to keep pace with the speed of progress in AI. We then present our proposal for addressing this problem: governments must invest in measurement and monitoring infrastructure. We discuss this proposal in detail, outlining what specific things governments could focus on measuring and monitoring, and the kinds of benefits this would generate for policymaking. Finally, we outline some potential pilot projects and some considerations for implementing this in practice.
[ { "created": "Sat, 28 Aug 2021 19:41:22 GMT", "version": "v1" }, { "created": "Tue, 31 Aug 2021 12:49:31 GMT", "version": "v2" } ]
2021-09-01
[ [ "Whittlestone", "Jess", "" ], [ "Clark", "Jack", "" ] ]
In this paper we outline a proposal for improving the governance of artificial intelligence (AI) by investing in government capacity to systematically measure and monitor the capabilities and impacts of AI systems. If adopted, this would give governments greater information about the AI ecosystem, equipping them to more effectively direct AI development and deployment in the most societally and economically beneficial directions. It would also create infrastructure that could rapidly identify potential threats or harms that could occur as a consequence of changes in the AI ecosystem, such as the emergence of strategically transformative capabilities, or the deployment of harmful systems. We begin by outlining the problem which motivates this proposal: in brief, traditional governance approaches struggle to keep pace with the speed of progress in AI. We then present our proposal for addressing this problem: governments must invest in measurement and monitoring infrastructure. We discuss this proposal in detail, outlining what specific things governments could focus on measuring and monitoring, and the kinds of benefits this would generate for policymaking. Finally, we outline some potential pilot projects and some considerations for implementing this in practice.
2109.11731
Linlang Jiang
Linlang Jiang, Jingbo Zhou, Tong Xu, Yanyan Li, Hao Chen, Jizhou Huang, Hui Xiong
Adversarial Neural Trip Recommendation
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Trip recommender systems, which aim to recommend a trip consisting of several ordered Points of Interest (POIs), have long been treated as an important application for many location-based services. Currently, most prior works generate trips following pre-defined objectives based on constraint programming, which may fail to reflect the complex latent patterns hidden in human mobility data. Moreover, most of these methods struggle to respond in real time when the number of POIs is large. To that end, we propose an Adversarial Neural Trip Recommendation (ANT) framework to tackle the above challenges. First of all, we devise a novel attention-based encoder-decoder trip generator that can learn the correlations among POIs and generate well-designed trips under given constraints. Another novelty of ANT lies in an adversarial learning strategy integrated with reinforcement learning to guide the trip generator to produce high-quality trips. For this purpose, we introduce a discriminator, which distinguishes the generated trips from real-life trips taken by users, to provide reward signals to optimize the generator. Moreover, we devise a novel pre-training scheme based on learning from demonstration, which speeds up convergence and yields an efficient training process. Extensive experiments on four real-world datasets validate the effectiveness and efficiency of our proposed ANT framework, and demonstrate that ANT remarkably outperforms state-of-the-art baselines with short response times.
[ { "created": "Fri, 24 Sep 2021 03:57:25 GMT", "version": "v1" } ]
2021-09-27
[ [ "Jiang", "Linlang", "" ], [ "Zhou", "Jingbo", "" ], [ "Xu", "Tong", "" ], [ "Li", "Yanyan", "" ], [ "Chen", "Hao", "" ], [ "Huang", "Jizhou", "" ], [ "Xiong", "Hui", "" ] ]
Trip recommender systems, which aim to recommend a trip consisting of several ordered Points of Interest (POIs), have long been treated as an important application for many location-based services. Currently, most prior works generate trips following pre-defined objectives based on constraint programming, which may fail to reflect the complex latent patterns hidden in human mobility data. Moreover, most of these methods struggle to respond in real time when the number of POIs is large. To that end, we propose an Adversarial Neural Trip Recommendation (ANT) framework to tackle the above challenges. First of all, we devise a novel attention-based encoder-decoder trip generator that can learn the correlations among POIs and generate well-designed trips under given constraints. Another novelty of ANT lies in an adversarial learning strategy integrated with reinforcement learning to guide the trip generator to produce high-quality trips. For this purpose, we introduce a discriminator, which distinguishes the generated trips from real-life trips taken by users, to provide reward signals to optimize the generator. Moreover, we devise a novel pre-training scheme based on learning from demonstration, which speeds up convergence and yields an efficient training process. Extensive experiments on four real-world datasets validate the effectiveness and efficiency of our proposed ANT framework, and demonstrate that ANT remarkably outperforms state-of-the-art baselines with short response times.
1903.02810
Dorothee Henke
Christoph Buchheim and Dorothee Henke
The robust bilevel continuous knapsack problem with uncertain coefficients in the follower's objective
An extended version of Section 8 of v2 can be found at arXiv:2108.12303
Journal of Global Optimization 83(4), 803-824 (2022)
10.1007/s10898-021-01117-9
null
cs.DS cs.DM math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a bilevel continuous knapsack problem where the leader controls the capacity of the knapsack and the follower chooses an optimal packing according to his own profits, which may differ from those of the leader. To this bilevel problem, we add uncertainty in a natural way, assuming that the leader does not have full knowledge about the follower's problem. More precisely, adopting the robust optimization approach and assuming that the follower's profits belong to a given uncertainty set, our aim is to compute a solution that optimizes the worst-case follower's reaction from the leader's perspective. By investigating the complexity of this problem with respect to different types of uncertainty sets, we make first steps towards better understanding the combination of bilevel optimization and robust combinatorial optimization. We show that the problem can be solved in polynomial time for both discrete and interval uncertainty, but that the same problem becomes NP-hard when each coefficient can independently assume only a finite number of values. In particular, this demonstrates that replacing uncertainty sets by their convex hulls may change the problem significantly, in contrast to the situation in classical single-level robust optimization. For general polytopal uncertainty, the problem again turns out to be NP-hard, and the same is true for ellipsoidal uncertainty even in the uncorrelated case. All presented hardness results already apply to the evaluation of the leader's objective function.
[ { "created": "Thu, 7 Mar 2019 10:15:54 GMT", "version": "v1" }, { "created": "Fri, 6 Mar 2020 16:09:41 GMT", "version": "v2" }, { "created": "Thu, 29 Jul 2021 12:34:23 GMT", "version": "v3" }, { "created": "Tue, 11 Jan 2022 17:02:47 GMT", "version": "v4" } ]
2022-07-19
[ [ "Buchheim", "Christoph", "" ], [ "Henke", "Dorothee", "" ] ]
We consider a bilevel continuous knapsack problem where the leader controls the capacity of the knapsack and the follower chooses an optimal packing according to his own profits, which may differ from those of the leader. To this bilevel problem, we add uncertainty in a natural way, assuming that the leader does not have full knowledge about the follower's problem. More precisely, adopting the robust optimization approach and assuming that the follower's profits belong to a given uncertainty set, our aim is to compute a solution that optimizes the worst-case follower's reaction from the leader's perspective. By investigating the complexity of this problem with respect to different types of uncertainty sets, we make first steps towards better understanding the combination of bilevel optimization and robust combinatorial optimization. We show that the problem can be solved in polynomial time for both discrete and interval uncertainty, but that the same problem becomes NP-hard when each coefficient can independently assume only a finite number of values. In particular, this demonstrates that replacing uncertainty sets by their convex hulls may change the problem significantly, in contrast to the situation in classical single-level robust optimization. For general polytopal uncertainty, the problem again turns out to be NP-hard, and the same is true for ellipsoidal uncertainty even in the uncorrelated case. All presented hardness results already apply to the evaluation of the leader's objective function.
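For illustration, the follower's subproblem in the setting above is an ordinary continuous knapsack, which is solved optimally by the classical greedy rule of packing items in decreasing order of profit per unit weight. The sketch below shows only that subroutine; the item data are invented, and the leader's robust problem is not addressed.

```python
# Sketch of the follower's subproblem in the bilevel continuous knapsack
# setting: given a capacity chosen by the leader, the follower packs items
# fractionally in decreasing order of profit per unit weight (the classical
# greedy optimum for the continuous knapsack). Item data are illustrative.
def follower_packing(capacity, items):
    """items: list of (weight, follower_profit); returns fractions x_i in [0, 1]."""
    order = sorted(range(len(items)),
                   key=lambda i: items[i][1] / items[i][0],
                   reverse=True)
    x = [0.0] * len(items)
    remaining = capacity
    for i in order:
        w, _ = items[i]
        take = min(1.0, remaining / w) if remaining > 0 else 0.0
        x[i] = take
        remaining -= take * w
    return x

items = [(2.0, 3.0), (3.0, 1.0), (1.0, 2.5)]   # (weight, follower profit)
print(follower_packing(capacity=3.5, items=items))
```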
1911.09565
Cassie Meeker
Cassie Meeker, Maximilian Haas-Heger, Matei Ciocarlie
A Continuous Teleoperation Subspace with Empirical and Algorithmic Mapping Algorithms for Non-Anthropomorphic Hands
14 pages, 6 tables, 8 figures, accepted October 2020 IEEE T-ASE
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Teleoperation is a valuable tool for robotic manipulators in highly unstructured environments. However, finding an intuitive mapping between a human hand and a non-anthropomorphic robot hand can be difficult, due to the hands' dissimilar kinematics. In this paper, we seek to create a mapping between the human hand and a fully actuated, non-anthropomorphic robot hand that is intuitive enough to enable effective real-time teleoperation, even for novice users. To accomplish this, we propose a low-dimensional teleoperation subspace which can be used as an intermediary for mapping between hand pose spaces. We present two different methods to define the teleoperation subspace: an empirical definition, which requires a person to define hand motions in an intuitive, hand-specific way, and an algorithmic definition, which is kinematically independent, and uses objects to define the subspace. We use each of these definitions to create a teleoperation mapping for different hands. One of the main contributions of this paper is the validation of both the empirical and algorithmic mappings with teleoperation experiments controlled by ten novices and performed on two kinematically distinct hands. The experiments show that the proposed subspace is relevant to teleoperation, intuitive enough to enable control by novices, and can generalize to non-anthropomorphic hands with different kinematics.
[ { "created": "Thu, 21 Nov 2019 15:58:26 GMT", "version": "v1" }, { "created": "Wed, 1 Apr 2020 19:45:52 GMT", "version": "v2" }, { "created": "Fri, 10 Jul 2020 19:36:37 GMT", "version": "v3" }, { "created": "Thu, 17 Sep 2020 15:02:15 GMT", "version": "v4" }, { "created": "Thu, 29 Oct 2020 03:53:54 GMT", "version": "v5" } ]
2020-10-30
[ [ "Meeker", "Cassie", "" ], [ "Haas-Heger", "Maximilian", "" ], [ "Ciocarlie", "Matei", "" ] ]
Teleoperation is a valuable tool for robotic manipulators in highly unstructured environments. However, finding an intuitive mapping between a human hand and a non-anthropomorphic robot hand can be difficult, due to the hands' dissimilar kinematics. In this paper, we seek to create a mapping between the human hand and a fully actuated, non-anthropomorphic robot hand that is intuitive enough to enable effective real-time teleoperation, even for novice users. To accomplish this, we propose a low-dimensional teleoperation subspace which can be used as an intermediary for mapping between hand pose spaces. We present two different methods to define the teleoperation subspace: an empirical definition, which requires a person to define hand motions in an intuitive, hand-specific way, and an algorithmic definition, which is kinematically independent, and uses objects to define the subspace. We use each of these definitions to create a teleoperation mapping for different hands. One of the main contributions of this paper is the validation of both the empirical and algorithmic mappings with teleoperation experiments controlled by ten novices and performed on two kinematically distinct hands. The experiments show that the proposed subspace is relevant to teleoperation, intuitive enough to enable control by novices, and can generalize to non-anthropomorphic hands with different kinematics.
1410.5358
Claudio Cusano
Claudio Cusano, Paolo Napoletano, Raimondo Schettini
Remote sensing image classification exploiting multiple kernel learning
Accepted for publication on the IEEE Geoscience and Remote Sensing letters
null
10.1109/LGRS.2015.2476365
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a strategy for land use classification which exploits Multiple Kernel Learning (MKL) to automatically determine a suitable combination of a set of features without requiring any heuristic knowledge about the classification task. We present a novel procedure that allows MKL to achieve good performance in the case of small training sets. Experimental results on publicly available datasets demonstrate the feasibility of the proposed approach.
[ { "created": "Mon, 20 Oct 2014 17:15:50 GMT", "version": "v1" }, { "created": "Fri, 19 Dec 2014 13:17:27 GMT", "version": "v2" }, { "created": "Tue, 1 Sep 2015 09:25:50 GMT", "version": "v3" } ]
2016-11-17
[ [ "Cusano", "Claudio", "" ], [ "Napoletano", "Paolo", "" ], [ "Schettini", "Raimondo", "" ] ]
We propose a strategy for land use classification which exploits Multiple Kernel Learning (MKL) to automatically determine a suitable combination of a set of features without requiring any heuristic knowledge about the classification task. We present a novel procedure that allows MKL to achieve good performance in the case of small training sets. Experimental results on publicly available datasets demonstrate the feasibility of the proposed approach.
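As a rough illustration of the MKL idea referenced above, the sketch below combines two base kernels with a convex weight and trains an SVM on the precomputed combination, selecting the weight by a naive grid search. The toy data, kernel choices and search procedure are assumptions, not the procedure used in the paper.

```python
# Minimal sketch of the multiple kernel learning idea: combine base kernels
# with convex weights and train an SVM on the combined (precomputed) kernel.
# The toy data, the two base kernels and the naive grid search over weights
# are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                 # stand-in for image features
y = (X[:, 0] + 0.25 * X[:, 1] > 0).astype(int)
Xtr, Xte, ytr, yte = X[:40], X[40:], y[:40], y[40:]

kernels_tr = [rbf_kernel(Xtr, Xtr), linear_kernel(Xtr, Xtr)]
kernels_te = [rbf_kernel(Xte, Xtr), linear_kernel(Xte, Xtr)]

best = None
for w in np.linspace(0.0, 1.0, 11):          # convex combination weight
    K_tr = w * kernels_tr[0] + (1 - w) * kernels_tr[1]
    K_te = w * kernels_te[0] + (1 - w) * kernels_te[1]
    clf = SVC(kernel="precomputed").fit(K_tr, ytr)
    acc = clf.score(K_te, yte)
    if best is None or acc > best[0]:
        best = (acc, w)
print("best held-out accuracy %.2f with weight %.1f" % best)
```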
2003.03417
Jiaming Zha
Jiaming Zha, Xiangyu Wu, Joseph Kroeger, Natalia Perez and Mark W. Mueller
A collision-resilient aerial vehicle with icosahedron tensegrity structure
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aerial vehicles with collision resilience can operate with more confidence in environments with obstacles that are hard to detect and avoid. This paper presents the methodology used to design a collision-resilient aerial vehicle with an icosahedron tensegrity structure. A simplified stress analysis of the tensegrity frame under impact forces is performed to guide the selection of its components. In addition, an autonomous controller is presented to reorient the vehicle from an arbitrary orientation on the ground to help it take off. Experiments show that the vehicle can successfully reorient itself after landing upside-down and can survive collisions at speeds of up to 6.5 m/s.
[ { "created": "Fri, 6 Mar 2020 20:14:32 GMT", "version": "v1" } ]
2020-03-10
[ [ "Zha", "Jiaming", "" ], [ "Wu", "Xiangyu", "" ], [ "Kroeger", "Joseph", "" ], [ "Perez", "Natalia", "" ], [ "Mueller", "Mark W.", "" ] ]
Aerial vehicles with collision resilience can operate with more confidence in environments with obstacles that are hard to detect and avoid. This paper presents the methodology used to design a collision-resilient aerial vehicle with an icosahedron tensegrity structure. A simplified stress analysis of the tensegrity frame under impact forces is performed to guide the selection of its components. In addition, an autonomous controller is presented to reorient the vehicle from an arbitrary orientation on the ground to help it take off. Experiments show that the vehicle can successfully reorient itself after landing upside-down and can survive collisions at speeds of up to 6.5 m/s.
1610.05121
Fang Junhua
Junhua Fang, Rong Zhang, Tom Z.J.Fu, Zhenjie Zhang, Aoying Zhou, Junhua Zhu
Parallel Stream Processing Against Workload Skewness and Variance
null
null
null
null
cs.DC cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Key-based workload partitioning is a common strategy used in parallel stream processing engines, enabling effective key-value tuple distribution over worker threads in a logical operator. While randomized hashing on the keys is capable of balancing the workload for key-based partitioning when the keys generally follow a static distribution, it is likely to yield poor balancing performance when workload variance occurs on the incoming data stream. This paper presents a new key-based workload partitioning framework, with practical algorithms to support dynamic workload assignment for stateful operators. The framework combines hash-based and explicit key-based routing strategies for workload distribution: it specifies the destination worker threads for a handful of keys and assigns the other keys with the hashing function. When short-term distribution fluctuations occur in the incoming data stream, the system adaptively updates the routing table containing the chosen keys, in order to rebalance the workload with minimal migration overhead within the stateful operator. We formulate the rebalance operation as an optimization problem, with multiple objectives on minimizing state migration costs, controlling the size of the routing table and breaking workload imbalance among worker threads. Despite the NP-hardness of the optimization formulation, we carefully investigate and justify the heuristics behind key (re)routing and state migration, to facilitate fast response to workload variance with negligible cost to normal processing in the distributed system. Empirical studies on synthetic data and real-world stream applications validate the usefulness of our proposals and demonstrate the substantial advantage of our approaches over state-of-the-art solutions in the literature.
[ { "created": "Mon, 17 Oct 2016 14:03:41 GMT", "version": "v1" }, { "created": "Tue, 13 Dec 2016 09:04:03 GMT", "version": "v2" } ]
2016-12-14
[ [ "Fang", "Junhua", "" ], [ "Zhang", "Rong", "" ], [ "Fu", "Tom Z. J.", "" ], [ "Zhang", "Zhenjie", "" ], [ "Zhou", "Aoying", "" ], [ "Zhu", "Junhua", "" ] ]
Key-based workload partitioning is a common strategy used in parallel stream processing engines, enabling effective key-value tuple distribution over worker threads in a logical operator. While randomized hashing on the keys is capable of balancing the workload for key-based partitioning when the keys generally follow a static distribution, it is likely to yield poor balancing performance when workload variance occurs on the incoming data stream. This paper presents a new key-based workload partitioning framework, with practical algorithms to support dynamic workload assignment for stateful operators. The framework combines hash-based and explicit key-based routing strategies for workload distribution: it specifies the destination worker threads for a handful of keys and assigns the other keys with the hashing function. When short-term distribution fluctuations occur in the incoming data stream, the system adaptively updates the routing table containing the chosen keys, in order to rebalance the workload with minimal migration overhead within the stateful operator. We formulate the rebalance operation as an optimization problem, with multiple objectives on minimizing state migration costs, controlling the size of the routing table and breaking workload imbalance among worker threads. Despite the NP-hardness of the optimization formulation, we carefully investigate and justify the heuristics behind key (re)routing and state migration, to facilitate fast response to workload variance with negligible cost to normal processing in the distributed system. Empirical studies on synthetic data and real-world stream applications validate the usefulness of our proposals and demonstrate the substantial advantage of our approaches over state-of-the-art solutions in the literature.
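To illustrate the hybrid routing idea described above, the sketch below keeps a small explicit routing table for a handful of hot keys and hashes everything else, with a greedy rebalancing rule that pins the heaviest keys to the currently lightest workers. This is a simplification of the optimization formulated in the paper; the worker count, hash function and update rule are assumptions.

```python
# Minimal sketch of hybrid key routing: a small explicit routing table pins
# a handful of hot keys to chosen workers, every other key falls back to
# hashing. The greedy table-update rule below is a simplification of the
# optimization described in the paper.
from collections import Counter
import zlib

NUM_WORKERS = 4
routing_table = {}          # key -> worker id (only for hot keys)

def route(key):
    if key in routing_table:
        return routing_table[key]
    return zlib.crc32(key.encode()) % NUM_WORKERS

def rebalance(key_counts, table_size=2):
    """Pin the heaviest keys to the currently lightest workers."""
    load = Counter()
    for key, cnt in key_counts.items():
        load[route(key)] += cnt
    for key, _ in key_counts.most_common(table_size):
        lightest = min(range(NUM_WORKERS), key=lambda w: load[w])
        load[route(key)] -= key_counts[key]
        routing_table[key] = lightest
        load[lightest] += key_counts[key]

counts = Counter({"user42": 900, "user7": 400, "user1": 30, "user9": 20})
rebalance(counts)
print(routing_table, {k: route(k) for k in counts})
```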
2402.02499
Pinhao Song
Pinhao Song, Pengteng Li, Erwin Aertbelien, Renaud Detry
Robot Trajectron: Trajectory Prediction-based Shared Control for Robot Manipulation
Accepted by ICRA2024
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of (a) predicting the trajectory of an arm reaching motion, based on a few seconds of the motion's onset, and (b) leveraging this predictor to facilitate shared-control manipulation tasks, easing the cognitive load of the operator by assisting them in their anticipated direction of motion. Our novel intent estimator, dubbed the \emph{Robot Trajectron} (RT), produces a probabilistic representation of the robot's anticipated trajectory based on its recent position, velocity and acceleration history. Taking arm dynamics into account allows RT to capture the operator's intent better than other SOTA models that only use the arm's position, making it particularly well-suited to assist in tasks where the operator's intent is susceptible to change. We derive a novel shared-control solution that combines RT's predictive capacity to a representation of the locations of potential reaching targets. Our experiments demonstrate RT's effectiveness in both intent estimation and shared-control tasks. We will make the code and data supporting our experiments publicly available at https://github.com/mousecpn/Robot-Trajectron.git.
[ { "created": "Sun, 4 Feb 2024 14:18:20 GMT", "version": "v1" } ]
2024-02-06
[ [ "Song", "Pinhao", "" ], [ "Li", "Pengteng", "" ], [ "Aertbelien", "Erwin", "" ], [ "Detry", "Renaud", "" ] ]
We address the problem of (a) predicting the trajectory of an arm reaching motion, based on a few seconds of the motion's onset, and (b) leveraging this predictor to facilitate shared-control manipulation tasks, easing the cognitive load of the operator by assisting them in their anticipated direction of motion. Our novel intent estimator, dubbed the \emph{Robot Trajectron} (RT), produces a probabilistic representation of the robot's anticipated trajectory based on its recent position, velocity and acceleration history. Taking arm dynamics into account allows RT to capture the operator's intent better than other SOTA models that only use the arm's position, making it particularly well-suited to assist in tasks where the operator's intent is susceptible to change. We derive a novel shared-control solution that combines RT's predictive capacity to a representation of the locations of potential reaching targets. Our experiments demonstrate RT's effectiveness in both intent estimation and shared-control tasks. We will make the code and data supporting our experiments publicly available at https://github.com/mousecpn/Robot-Trajectron.git.
2312.02659
Sergio Davies PhD
Sergio Davies and Andrew Gait and Andrew Rowley and Alessandro Di Nuovo
Supervised learning of spatial features with STDP and homeostasis using Spiking Neural Networks on SpiNNaker
14 pages, 6 figures (figure 6 has 9 sub-figures) for a total of 14 images, 10 tables, submitted to the Journal of Neural Networks
null
null
null
cs.NE cs.AI
http://creativecommons.org/licenses/by/4.0/
Artificial Neural Networks (ANNs) have gained significant popularity thanks to their ability to learn using the well-known backpropagation algorithm. Conversely, Spiking Neural Networks (SNNs), despite having broader capabilities than ANNs, have always posed challenges in the training phase. This paper presents a new method to perform supervised learning on SNNs, using Spike Timing Dependent Plasticity (STDP) and homeostasis, aiming at training the network to identify spatial patterns. Spatial patterns refer to spike patterns without a time component, where all spike events occur simultaneously. The method is tested using the SpiNNaker digital architecture. An SNN is trained to recognise one or multiple patterns, and performance metrics are extracted to measure the performance of the network. Some considerations are drawn from the results, showing that, in the case of a single trained pattern, the network behaves as the ideal detector, with 100% accuracy in detecting the trained pattern. However, as the number of trained patterns on a single network increases, the accuracy of identification is linked to the similarities between these patterns. This method of training an SNN to detect spatial patterns may be applied to pattern recognition in static images or traffic analysis in computer networks, where each network packet represents a spatial pattern. It is argued that the homeostatic factor may enable the network to detect patterns with some degree of similarity, rather than only perfectly matching patterns. The principles outlined in this article serve as the fundamental building blocks for more complex systems that utilise both spatial and temporal patterns by converting specific features of input signals into spikes. One example of such a system is a computer network packet classifier, tasked with real-time identification of packet streams based on features within the packet content.
[ { "created": "Tue, 5 Dec 2023 10:53:31 GMT", "version": "v1" }, { "created": "Mon, 24 Jun 2024 22:15:57 GMT", "version": "v2" } ]
2024-06-26
[ [ "Davies", "Sergio", "" ], [ "Gait", "Andrew", "" ], [ "Rowley", "Andrew", "" ], [ "Di Nuovo", "Alessandro", "" ] ]
Artificial Neural Networks (ANNs) have gained significant popularity thanks to their ability to learn using the well-known backpropagation algorithm. Conversely, Spiking Neural Networks (SNNs), despite having broader capabilities than ANNs, have always posed challenges in the training phase. This paper presents a new method to perform supervised learning on SNNs, using Spike Timing Dependent Plasticity (STDP) and homeostasis, aiming at training the network to identify spatial patterns. Spatial patterns refer to spike patterns without a time component, where all spike events occur simultaneously. The method is tested using the SpiNNaker digital architecture. An SNN is trained to recognise one or multiple patterns, and performance metrics are extracted to measure the performance of the network. Some considerations are drawn from the results, showing that, in the case of a single trained pattern, the network behaves as the ideal detector, with 100% accuracy in detecting the trained pattern. However, as the number of trained patterns on a single network increases, the accuracy of identification is linked to the similarities between these patterns. This method of training an SNN to detect spatial patterns may be applied to pattern recognition in static images or traffic analysis in computer networks, where each network packet represents a spatial pattern. It is argued that the homeostatic factor may enable the network to detect patterns with some degree of similarity, rather than only perfectly matching patterns. The principles outlined in this article serve as the fundamental building blocks for more complex systems that utilise both spatial and temporal patterns by converting specific features of input signals into spikes. One example of such a system is a computer network packet classifier, tasked with real-time identification of packet streams based on features within the packet content.
2011.01360
Gerry Chen
Shuo Yang, Gerry Chen, Yetong Zhang, Howie Choset, and Frank Dellaert
Equality Constrained Linear Optimal Control With Factor Graphs
6 pages + references, 8 figures
null
10.1109/ICRA48506.2021.9562000
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel factor graph-based approach to solve the discrete-time finite-horizon Linear Quadratic Regulator problem subject to auxiliary linear equality constraints within and across time steps. We represent such optimal control problems using constrained factor graphs and optimize the factor graphs to obtain the optimal trajectory and the feedback control policies using the variable elimination algorithm with a modified Gram-Schmidt process. We prove that our approach has the same order of computational complexity as the state-of-the-art dynamic programming approach. Furthermore, current dynamic programming approaches can only handle equality constraints between variables at the same time step, but ours can handle equality constraints among any combination of variables at any time step while maintaining linear complexity with respect to trajectory length. Our approach can be used to efficiently generate trajectories and feedback control policies to achieve periodic motion or repetitive manipulation.
[ { "created": "Mon, 2 Nov 2020 22:36:52 GMT", "version": "v1" }, { "created": "Thu, 30 Sep 2021 04:30:15 GMT", "version": "v2" } ]
2021-10-27
[ [ "Yang", "Shuo", "" ], [ "Chen", "Gerry", "" ], [ "Zhang", "Yetong", "" ], [ "Choset", "Howie", "" ], [ "Dellaert", "Frank", "" ] ]
This paper presents a novel factor graph-based approach to solve the discrete-time finite-horizon Linear Quadratic Regulator problem subject to auxiliary linear equality constraints within and across time steps. We represent such optimal control problems using constrained factor graphs and optimize the factor graphs to obtain the optimal trajectory and the feedback control policies using the variable elimination algorithm with a modified Gram-Schmidt process. We prove that our approach has the same order of computational complexity as the state-of-the-art dynamic programming approach. Furthermore, current dynamic programming approaches can only handle equality constraints between variables at the same time step, but ours can handle equality constraints among any combination of variables at any time step while maintaining linear complexity with respect to trajectory length. Our approach can be used to efficiently generate trajectories and feedback control policies to achieve periodic motion or repetitive manipulation.
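For context, the dynamic-programming baseline mentioned above solves the unconstrained discrete-time finite-horizon LQR with a backward Riccati recursion. The sketch below shows that classical recursion only (no auxiliary equality constraints and no factor graphs); the double-integrator dynamics and cost weights are illustrative assumptions.

```python
# Sketch of the classical dynamic-programming baseline for the unconstrained
# discrete-time finite-horizon LQR: backward Riccati recursion yielding the
# time-varying feedback gains K_t (u_t = -K_t x_t). The double-integrator
# dynamics and costs are illustrative assumptions.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
T = 50

P = Q.copy()                              # terminal cost-to-go
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                           # gains[t] applies at time step t

x = np.array([[1.0], [0.0]])
for t in range(T):
    u = -gains[t] @ x
    x = A @ x + B @ u
print("final state:", x.ravel())
```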
2403.10066
Ziyu Shan
Ziyu Shan, Yujie Zhang, Qi Yang, Haichen Yang, Yiling Xu, Jenq-Neng Hwang, Xiaozhong Xu and Shan Liu
Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment
null
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without an available reference, a task that has achieved tremendous improvements due to the utilization of deep neural networks. However, learning-based NR-PCQA methods suffer from the scarcity of labeled data and usually perform suboptimally in terms of generalization. To solve the problem, we propose a novel contrastive pre-training framework tailored for PCQA (CoPA), which enables the pre-trained model to learn quality-aware representations from unlabeled data. To obtain anchors in the representation space, we project point clouds with different distortions into images and randomly mix their local patches to form mixed images with multiple distortions. Utilizing the generated anchors, we constrain the pre-training process via a quality-aware contrastive loss, following the philosophy that perceptual quality is closely related to both content and distortion. Furthermore, in the model fine-tuning stage, we propose a semantic-guided multi-view fusion module to effectively integrate the features of projected images from multiple perspectives. Extensive experiments show that our method outperforms the state-of-the-art PCQA methods on popular benchmarks. Further investigations demonstrate that CoPA can also benefit existing learning-based PCQA models.
[ { "created": "Fri, 15 Mar 2024 07:16:07 GMT", "version": "v1" }, { "created": "Mon, 25 Mar 2024 06:27:57 GMT", "version": "v2" }, { "created": "Wed, 27 Mar 2024 02:25:51 GMT", "version": "v3" } ]
2024-03-28
[ [ "Shan", "Ziyu", "" ], [ "Zhang", "Yujie", "" ], [ "Yang", "Qi", "" ], [ "Yang", "Haichen", "" ], [ "Xu", "Yiling", "" ], [ "Hwang", "Jenq-Neng", "" ], [ "Xu", "Xiaozhong", "" ], [ "Liu", "Shan", "" ] ]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without an available reference, a task that has achieved tremendous improvements due to the utilization of deep neural networks. However, learning-based NR-PCQA methods suffer from the scarcity of labeled data and usually perform suboptimally in terms of generalization. To solve the problem, we propose a novel contrastive pre-training framework tailored for PCQA (CoPA), which enables the pre-trained model to learn quality-aware representations from unlabeled data. To obtain anchors in the representation space, we project point clouds with different distortions into images and randomly mix their local patches to form mixed images with multiple distortions. Utilizing the generated anchors, we constrain the pre-training process via a quality-aware contrastive loss, following the philosophy that perceptual quality is closely related to both content and distortion. Furthermore, in the model fine-tuning stage, we propose a semantic-guided multi-view fusion module to effectively integrate the features of projected images from multiple perspectives. Extensive experiments show that our method outperforms the state-of-the-art PCQA methods on popular benchmarks. Further investigations demonstrate that CoPA can also benefit existing learning-based PCQA models.
2001.11062
Olivia Brown
Stephen Mell, Olivia Brown, Justin Goodwin, Sung-Hyun Son
Safe Predictors for Enforcing Input-Output Specifications
10 pages, 5 figures, paper accepted to the NeurIPS 2019 Workshop on Machine Learning with Guarantees and the NeurIPS 2019 Workshop on Safety and Robustness in Decision Making
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach for designing correct-by-construction neural networks (and other machine learning models) that are guaranteed to be consistent with a collection of input-output specifications before, during, and after algorithm training. Our method involves designing a constrained predictor for each set of compatible constraints, and combining them safely via a convex combination of their predictions. We demonstrate our approach on synthetic datasets and an aircraft collision avoidance problem.
[ { "created": "Wed, 29 Jan 2020 19:39:22 GMT", "version": "v1" } ]
2020-01-31
[ [ "Mell", "Stephen", "" ], [ "Brown", "Olivia", "" ], [ "Goodwin", "Justin", "" ], [ "Son", "Sung-Hyun", "" ] ]
We present an approach for designing correct-by-construction neural networks (and other machine learning models) that are guaranteed to be consistent with a collection of input-output specifications before, during, and after algorithm training. Our method involves designing a constrained predictor for each set of compatible constraints, and combining them safely via a convex combination of their predictions. We demonstrate our approach on synthetic datasets and an aircraft collision avoidance problem.
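As a toy illustration of the construction above, the sketch below builds one predictor that satisfies a simple input-output specification by design and blends it with an unconstrained predictor through an input-dependent convex weight that saturates to 1 on the constrained region. The 1-D specification (x >= 0 implies f(x) >= 0), both regressors and the weighting function are assumptions, not the paper's networks.

```python
# Toy sketch of the safe-predictor construction: one predictor is constrained
# by design (soft-plus keeps its output non-negative), and the final output is
# a convex combination whose weight is exactly 1 on the region where the
# input-output specification applies. The 1-D specification "x >= 0 implies
# f(x) >= 0" and both regressors are illustrative assumptions.
import numpy as np

def softplus(z):
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def unconstrained(x):          # free-form predictor, may go negative
    return np.sin(3.0 * x) - 0.2

def constrained(x):            # non-negative by construction
    return softplus(np.sin(3.0 * x) - 0.2)

def weight(x, ramp=0.2):       # continuous, exactly 1 for x >= 0
    return np.clip(1.0 + x / ramp, 0.0, 1.0)

def safe_predictor(x):
    w = weight(x)
    return w * constrained(x) + (1.0 - w) * unconstrained(x)

xs = np.linspace(-1.0, 1.0, 9)
for x in xs:
    assert x < 0 or safe_predictor(x) >= 0.0   # specification holds on x >= 0
print(np.round(safe_predictor(xs), 3))
```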
1404.5507
Shun Watanabe
Shun Watanabe and Masahito Hayashi
Strong Converse and Second-Order Asymptotics of Channel Resolvability
7 pages, a shorter version will appear in ISIT 2014, this version includes the proofs of technical lemmas in appendices
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of channel resolvability for fixed i.i.d. input distributions and discrete memoryless channels (DMCs), and derive the strong converse theorem for any DMCs that are not necessarily full rank. We also derive the optimal second-order rate under a condition. Furthermore, under the condition that a DMC has the unique capacity achieving input distribution, we derive the optimal second-order rate of channel resolvability for the worst input distribution.
[ { "created": "Tue, 22 Apr 2014 14:10:34 GMT", "version": "v1" } ]
2014-04-23
[ [ "Watanabe", "Shun", "" ], [ "Hayashi", "Masahito", "" ] ]
We study the problem of channel resolvability for fixed i.i.d. input distributions and discrete memoryless channels (DMCs), and derive the strong converse theorem for any DMCs that are not necessarily full rank. We also derive the optimal second-order rate under a condition. Furthermore, under the condition that a DMC has the unique capacity achieving input distribution, we derive the optimal second-order rate of channel resolvability for the worst input distribution.
2001.09782
Tram Truong-Huu
Tien-Dung Cao, Tram Truong-Huu, Hien Tran, and Khanh Tran
A Federated Deep Learning Framework for Privacy Preservation and Communication Efficiency
null
Journal of Systems Architecture, vol. 124, March 2022
10.1016/j.sysarc.2022.102413
null
cs.DC cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has achieved great success in many applications. However, its deployment in practice has been hindered by two issues: the privacy of data that has to be aggregated centrally for model training, and the high communication overhead due to transmission of a large amount of data that is usually geographically distributed. Addressing both issues is challenging, and most existing works do not provide an efficient solution. In this paper, we develop FedPC, a Federated Deep Learning Framework for Privacy Preservation and Communication Efficiency. The framework allows a model to be learned on multiple private datasets while not revealing any information about the training data, even through intermediate data. The framework also minimizes the amount of data exchanged to update the model. We formally prove the convergence of the learning model when training with FedPC and its privacy-preserving property. We perform extensive experiments to evaluate the performance of FedPC in terms of the approximation to the upper-bound performance (when training centrally) and communication overhead. The results show that FedPC maintains the performance of the models within $8.5\%$ of the centrally-trained models when data is distributed to 10 computing nodes. FedPC also reduces the communication overhead by up to $42.20\%$ compared to existing works.
[ { "created": "Wed, 22 Jan 2020 02:52:31 GMT", "version": "v1" }, { "created": "Fri, 29 May 2020 14:39:14 GMT", "version": "v2" }, { "created": "Wed, 5 Jan 2022 05:05:42 GMT", "version": "v3" } ]
2022-02-04
[ [ "Cao", "Tien-Dung", "" ], [ "Truong-Huu", "Tram", "" ], [ "Tran", "Hien", "" ], [ "Tran", "Khanh", "" ] ]
Deep learning has achieved great success in many applications. However, its deployment in practice has been hindered by two issues: the privacy of data that has to be aggregated centrally for model training, and the high communication overhead due to transmission of a large amount of data that is usually geographically distributed. Addressing both issues is challenging, and most existing works do not provide an efficient solution. In this paper, we develop FedPC, a Federated Deep Learning Framework for Privacy Preservation and Communication Efficiency. The framework allows a model to be learned on multiple private datasets while not revealing any information about the training data, even through intermediate data. The framework also minimizes the amount of data exchanged to update the model. We formally prove the convergence of the learning model when training with FedPC and its privacy-preserving property. We perform extensive experiments to evaluate the performance of FedPC in terms of the approximation to the upper-bound performance (when training centrally) and communication overhead. The results show that FedPC maintains the performance of the models within $8.5\%$ of the centrally-trained models when data is distributed to 10 computing nodes. FedPC also reduces the communication overhead by up to $42.20\%$ compared to existing works.
2307.04816
Huixin Sun
Mingze Wang, Huixin Sun, Jun Shi, Xuhui Liu, Baochang Zhang, Xianbin Cao
Q-YOLO: Efficient Inference for Real-time Object Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Real-time object detection plays a vital role in various computer vision applications. However, deploying real-time object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper describes a low-bit quantization method to build a highly efficient one-stage detector, dubbed as Q-YOLO, which can effectively address the performance degradation problem caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO introduces a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme, which determines the maximum truncation values through histogram analysis by minimizing the Mean Squared Error (MSE) quantization errors. Extensive experiments on the COCO dataset demonstrate the effectiveness of Q-YOLO, outperforming other PTQ methods while achieving a more favorable balance between accuracy and computational cost. This research contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory overhead.
[ { "created": "Sat, 1 Jul 2023 03:50:32 GMT", "version": "v1" } ]
2023-07-12
[ [ "Wang", "Mingze", "" ], [ "Sun", "Huixin", "" ], [ "Shi", "Jun", "" ], [ "Liu", "Xuhui", "" ], [ "Zhang", "Baochang", "" ], [ "Cao", "Xianbin", "" ] ]
Real-time object detection plays a vital role in various computer vision applications. However, deploying real-time object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper describes a low-bit quantization method to build a highly efficient one-stage detector, dubbed as Q-YOLO, which can effectively address the performance degradation problem caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO introduces a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme, which determines the maximum truncation values through histogram analysis by minimizing the Mean Squared Error (MSE) quantization errors. Extensive experiments on the COCO dataset demonstrate the effectiveness of Q-YOLO, outperforming other PTQ methods while achieving a more favorable balance between accuracy and computational cost. This research contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory overhead.
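As a rough sketch of histogram/MSE-driven truncation of activations (the general idea behind the scheme described above, not Q-YOLO's exact UH procedure), the snippet below sweeps candidate clipping thresholds on a sample of non-negative activations and keeps the one minimizing the quantization MSE; the bit width, candidate grid and synthetic activation sample are assumptions.

```python
# Sketch of histogram/MSE-based selection of an activation clipping threshold
# for low-bit quantization: sweep candidate truncation values and keep the one
# minimizing the reconstruction MSE of uniformly quantized activations. The
# bit width, candidate grid and synthetic activation sample are illustrative
# assumptions.
import numpy as np

def quantize_dequantize(x, t_max, bits=8):
    """Unsigned uniform quantization of non-negative activations clipped to [0, t_max]."""
    levels = 2 ** bits - 1
    scale = t_max / levels
    q = np.clip(np.round(np.clip(x, 0.0, t_max) / scale), 0, levels)
    return q * scale

def best_truncation(sample, bits=8, num_candidates=100):
    candidates = np.linspace(sample.max() / num_candidates, sample.max(), num_candidates)
    mses = [np.mean((sample - quantize_dequantize(sample, t, bits)) ** 2)
            for t in candidates]
    return candidates[int(np.argmin(mses))]

rng = np.random.default_rng(0)
acts = rng.exponential(scale=1.0, size=100_000)   # long-tailed, non-negative activations
t = best_truncation(acts, bits=4)
print("chosen truncation %.3f vs raw max %.3f" % (t, acts.max()))
```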
2308.10427
Rongfei Fan
Shiyuan Zuo, Rongfei Fan, Han Hu, Ning Zhang, and Shimin Gong
Federated Learning Robust to Byzantine Attacks: Achieving Zero Optimality Gap
null
null
null
null
cs.LG cs.CR cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a robust aggregation method for federated learning (FL) that can effectively tackle malicious Byzantine attacks. At each user, the model parameters are first updated over multiple local steps, whose number is adjustable over iterations, and then pushed directly to the aggregation center. This decreases the number of interactions between the aggregation center and users, allows each user to set training parameters in a flexible way, and reduces the computation burden compared with existing works that need to combine multiple historical model parameters. At the aggregation center, the geometric median is leveraged to combine the model parameters received from the users. A rigorous proof shows that a zero optimality gap is achieved by our proposed method with linear convergence, as long as the fraction of Byzantine attackers is below one half. Numerical results verify the effectiveness of our proposed method.
[ { "created": "Mon, 21 Aug 2023 02:43:38 GMT", "version": "v1" } ]
2023-08-22
[ [ "Zuo", "Shiyuan", "" ], [ "Fan", "Rongfei", "" ], [ "Hu", "Han", "" ], [ "Zhang", "Ning", "" ], [ "Gong", "Shimin", "" ] ]
In this paper, we propose a robust aggregation method for federated learning (FL) that can effectively tackle malicious Byzantine attacks. At each user, the model parameters are first updated over multiple local steps, whose number is adjustable over iterations, and then pushed directly to the aggregation center. This decreases the number of interactions between the aggregation center and users, allows each user to set training parameters in a flexible way, and reduces the computation burden compared with existing works that need to combine multiple historical model parameters. At the aggregation center, the geometric median is leveraged to combine the model parameters received from the users. A rigorous proof shows that a zero optimality gap is achieved by our proposed method with linear convergence, as long as the fraction of Byzantine attackers is below one half. Numerical results verify the effectiveness of our proposed method.
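The aggregation step above relies on the geometric median, which is commonly computed with Weiszfeld's fixed-point iteration. The sketch below applies that iteration to stacked parameter vectors with a few simulated Byzantine updates; the data, user counts and iteration budget are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of the robust aggregation step: the geometric median of user model
# updates, computed with Weiszfeld's fixed-point iteration. The simulated
# honest/Byzantine updates and iteration budget are illustrative assumptions.
import numpy as np

def geometric_median(points, n_iters=100, eps=1e-8):
    """points: (num_users, dim) array of model parameter vectors."""
    z = points.mean(axis=0)                      # initial guess
    for _ in range(n_iters):
        d = np.linalg.norm(points - z, axis=1)
        d = np.maximum(d, eps)                   # avoid division by zero
        w = 1.0 / d
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(7, 5))        # 7 honest users
byzantine = rng.normal(loc=-50.0, scale=1.0, size=(3, 5))   # 3 attackers
updates = np.vstack([honest, byzantine])

print("mean        :", np.round(updates.mean(axis=0), 2))
print("geom. median:", np.round(geometric_median(updates), 2))
```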
1210.3598
Jaume Barcelo
Jaume Barcelo, Nuria Garcia, Azadeh Faridi, Simon Oechsner and Boris Bellalta
Modelling a Decentralized Constraint Satisfaction Solver for Collision-Free Channel Access
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the problem of assigning channel slots to a number of contending stations is modeled as a Constraint Satisfaction Problem (CSP). A learning MAC protocol that uses deterministic backoffs after successful transmissions is used as a decentralized solver for the CSP. The convergence process of the solver is modeled by an absorbing Markov chain (MC), and analytical, closed-form expressions for its transition probabilities are derived. Using these, the expected number of steps required to reach a solution is found. The analysis is validated by means of simulations and the model is extended to account for the presence of channel errors. The results are applicable in various resource allocation scenarios in wireless networks.
[ { "created": "Fri, 12 Oct 2012 18:49:38 GMT", "version": "v1" } ]
2012-10-15
[ [ "Barcelo", "Jaume", "" ], [ "Garcia", "Nuria", "" ], [ "Faridi", "Azadeh", "" ], [ "Oechsner", "Simon", "" ], [ "Bellalta", "Boris", "" ] ]
In this paper, the problem of assigning channel slots to a number of contending stations is modeled as a Constraint Satisfaction Problem (CSP). A learning MAC protocol that uses deterministic backoffs after successful transmissions is used as a decentralized solver for the CSP. The convergence process of the solver is modeled by an absorbing Markov chain (MC), and analytical, closed-form expressions for its transition probabilities are derived. Using these, the expected number of steps required to reach a solution is found. The analysis is validated by means of simulations and the model is extended to account for the presence of channel errors. The results are applicable in various resource allocation scenarios in wireless networks.
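To illustrate the protocol analysed above, the toy simulation below has each station keep its slot after a successful (collision-free) transmission and re-draw a slot uniformly at random after a collision, counting the frames until a collision-free schedule emerges. The station count, number of slots and error-free channel are simplifying assumptions, not the analytical Markov-chain model of the paper.

```python
# Toy simulation of the learning MAC used as a decentralized CSP solver:
# stations that collided pick a fresh random slot next frame, stations that
# succeeded keep their slot (deterministic backoff), so the system settles
# into a collision-free schedule. Station count, slot count and the
# error-free channel are simplifying assumptions.
import random
from collections import Counter

def frames_to_convergence(num_stations=8, num_slots=16, seed=0):
    rng = random.Random(seed)
    slots = [rng.randrange(num_slots) for _ in range(num_stations)]
    frame = 0
    while True:
        counts = Counter(slots)
        if all(c == 1 for c in counts.values()):
            return frame                      # collision-free schedule reached
        frame += 1
        slots = [s if counts[s] == 1 else rng.randrange(num_slots)
                 for s in slots]

results = [frames_to_convergence(seed=s) for s in range(200)]
print("mean frames to a collision-free schedule: %.1f" % (sum(results) / len(results)))
```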
0807.3593
Chandra Nair
Chandra Nair
An outer bound for 2-receiver discrete memoryless broadcast channels
3 pages, a note
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An outer bound to the two-receiver discrete memoryless broadcast channel is presented. We compare it to the known outer bounds and show that the outer bound presented is at least as tight as the existing bounds.
[ { "created": "Wed, 23 Jul 2008 03:43:30 GMT", "version": "v1" } ]
2008-07-24
[ [ "Nair", "Chandra", "" ] ]
An outer bound to the two-receiver discrete memoryless broadcast channel is presented. We compare it to the known outer bounds and show that the outer bound presented is at least as tight as the existing bounds.
2302.10688
Tianyu Pang
Tianyu Pang, Cheng Lu, Chao Du, Min Lin, Shuicheng Yan, Zhijie Deng
On Calibrating Diffusion Probabilistic Models
NeurIPS 2023
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, diffusion probabilistic models (DPMs) have achieved promising results in diverse generative tasks. A typical DPM framework includes a forward process that gradually diffuses the data distribution and a reverse process that recovers the data distribution from time-dependent data scores. In this work, we observe that the stochastic reverse process of data scores is a martingale, from which concentration bounds and the optional stopping theorem for data scores can be derived. Then, we discover a simple way for calibrating an arbitrary pretrained DPM, with which the score matching loss can be reduced and the lower bounds of model likelihood can consequently be increased. We provide general calibration guidelines under various model parametrizations. Our calibration method is performed only once and the resulting models can be used repeatedly for sampling. We conduct experiments on multiple datasets to empirically validate our proposal. Our code is at https://github.com/thudzj/Calibrated-DPMs.
[ { "created": "Tue, 21 Feb 2023 14:14:40 GMT", "version": "v1" }, { "created": "Fri, 26 May 2023 12:55:29 GMT", "version": "v2" }, { "created": "Sun, 29 Oct 2023 12:14:29 GMT", "version": "v3" } ]
2023-10-31
[ [ "Pang", "Tianyu", "" ], [ "Lu", "Cheng", "" ], [ "Du", "Chao", "" ], [ "Lin", "Min", "" ], [ "Yan", "Shuicheng", "" ], [ "Deng", "Zhijie", "" ] ]
Recently, diffusion probabilistic models (DPMs) have achieved promising results in diverse generative tasks. A typical DPM framework includes a forward process that gradually diffuses the data distribution and a reverse process that recovers the data distribution from time-dependent data scores. In this work, we observe that the stochastic reverse process of data scores is a martingale, from which concentration bounds and the optional stopping theorem for data scores can be derived. Then, we discover a simple way for calibrating an arbitrary pretrained DPM, with which the score matching loss can be reduced and the lower bounds of model likelihood can consequently be increased. We provide general calibration guidelines under various model parametrizations. Our calibration method is performed only once and the resulting models can be used repeatedly for sampling. We conduct experiments on multiple datasets to empirically validate our proposal. Our code is at https://github.com/thudzj/Calibrated-DPMs.
2003.04359
Chaoyue Niu
Chaoyue Niu, Danesh Tarapore and Klaus-Peter Zauner
Low-viewpoint forest depth dataset for sparse rover swarms
This paper has been accepted to IROS 2020 for oral presentation
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rapid progress in embedded computing hardware increasingly enables on-board image processing on small robots. This development opens the path to replacing costly sensors with sophisticated computer vision techniques. A case in point is the prediction of scene depth information from a monocular camera for autonomous navigation. Motivated by the aim to develop a robot swarm suitable for sensing, monitoring, and search applications in forests, we have collected a set of RGB images and corresponding depth maps. Over 100k images were recorded with a custom rig from the perspective of a small ground rover moving through a forest. Taken under different weather and lighting conditions, the images include scenes with grass, bushes, standing and fallen trees, tree branches, leaves, and dirt. In addition, GPS, IMU, and wheel encoder data were recorded. From the calibrated, synchronized, aligned and timestamped frames, about 9700 image-depth map pairs were selected for sharpness and variety. We provide this dataset to the community to fill a need identified in our own research and hope it will accelerate progress in robots navigating the challenging forest environment. This paper describes our custom hardware and methodology to collect the data, the subsequent processing and quality of the data, and how to access it.
[ { "created": "Mon, 9 Mar 2020 18:53:24 GMT", "version": "v1" }, { "created": "Mon, 26 Oct 2020 14:57:34 GMT", "version": "v2" } ]
2020-10-27
[ [ "Niu", "Chaoyue", "" ], [ "Tarapore", "Danesh", "" ], [ "Zauner", "Klaus-Peter", "" ] ]
Rapid progress in embedded computing hardware increasingly enables on-board image processing on small robots. This development opens the path to replacing costly sensors with sophisticated computer vision techniques. A case in point is the prediction of scene depth information from a monocular camera for autonomous navigation. Motivated by the aim to develop a robot swarm suitable for sensing, monitoring, and search applications in forests, we have collected a set of RGB images and corresponding depth maps. Over 100k images were recorded with a custom rig from the perspective of a small ground rover moving through a forest. Taken under different weather and lighting conditions, the images include scenes with grass, bushes, standing and fallen trees, tree branches, leaves, and dirt. In addition, GPS, IMU, and wheel encoder data were recorded. From the calibrated, synchronized, aligned and timestamped frames, about 9700 image-depth map pairs were selected for sharpness and variety. We provide this dataset to the community to fill a need identified in our own research and hope it will accelerate progress in robots navigating the challenging forest environment. This paper describes our custom hardware and methodology to collect the data, the subsequent processing and quality of the data, and how to access it.
2204.03516
Guillaume Sartoretti
Yutong Wang and Mehul Damani and Pamela Wang and Yuhong Cao and Guillaume Sartoretti
Distributed Reinforcement Learning for Robot Teams: A Review
Preprint of the paper submitted to Springer's Current Robotics Reports
null
null
null
cs.RO cs.AI cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose of review: Recent advances in sensing, actuation, and computation have opened the door to multi-robot systems consisting of hundreds/thousands of robots, with promising applications to automated manufacturing, disaster relief, harvesting, last-mile delivery, port/airport operations, or search and rescue. The community has leveraged model-free multi-agent reinforcement learning (MARL) to devise efficient, scalable controllers for multi-robot systems (MRS). This review aims to provide an analysis of the state-of-the-art in distributed MARL for multi-robot cooperation. Recent findings: Decentralized MRS face fundamental challenges, such as non-stationarity and partial observability. Building upon the "centralized training, decentralized execution" paradigm, recent MARL approaches include independent learning, centralized critic, value decomposition, and communication learning approaches. Cooperative behaviors are demonstrated through AI benchmarks and fundamental real-world robotic capabilities such as multi-robot motion/path planning. Summary: This survey reports the challenges surrounding decentralized model-free MARL for multi-robot cooperation and existing classes of approaches. We present benchmarks and robotic applications along with a discussion on current open avenues for research.
[ { "created": "Thu, 7 Apr 2022 15:34:19 GMT", "version": "v1" } ]
2022-04-08
[ [ "Wang", "Yutong", "" ], [ "Damani", "Mehul", "" ], [ "Wang", "Pamela", "" ], [ "Cao", "Yuhong", "" ], [ "Sartoretti", "Guillaume", "" ] ]
Purpose of review: Recent advances in sensing, actuation, and computation have opened the door to multi-robot systems consisting of hundreds/thousands of robots, with promising applications to automated manufacturing, disaster relief, harvesting, last-mile delivery, port/airport operations, or search and rescue. The community has leveraged model-free multi-agent reinforcement learning (MARL) to devise efficient, scalable controllers for multi-robot systems (MRS). This review aims to provide an analysis of the state-of-the-art in distributed MARL for multi-robot cooperation. Recent findings: Decentralized MRS face fundamental challenges, such as non-stationarity and partial observability. Building upon the "centralized training, decentralized execution" paradigm, recent MARL approaches include independent learning, centralized critic, value decomposition, and communication learning approaches. Cooperative behaviors are demonstrated through AI benchmarks and fundamental real-world robotic capabilities such as multi-robot motion/path planning. Summary: This survey reports the challenges surrounding decentralized model-free MARL for multi-robot cooperation and existing classes of approaches. We present benchmarks and robotic applications along with a discussion on current open avenues for research.
2012.01096
Liu Liu
Liu Liu, Hongdong Li, Haodong Yao and Ruyi Zha
PlueckerNet: Learn to Register 3D Line Reconstructions
12 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aligning two partially-overlapping 3D line reconstructions in Euclidean space is challenging, as we need to simultaneously solve for correspondences and the relative pose between the line reconstructions. This paper proposes a neural network based method with three modules connected in sequence: (i) a Multilayer Perceptron (MLP) based network takes Pluecker representations of lines as inputs to extract discriminative line-wise features and matchabilities (how likely each line is to have a match), (ii) an Optimal Transport (OT) layer takes the two-view line-wise features and matchabilities as inputs to estimate a 2D joint probability matrix, with each entry describing how well a line pair matches, and (iii) line pairs with the top-K matching probabilities are fed to a 2-line minimal solver in a RANSAC framework to estimate a six Degree-of-Freedom (6-DoF) rigid transformation. Experiments on both indoor and outdoor datasets show that the registration (rotation and translation) precision of our method outperforms baselines significantly.
[ { "created": "Wed, 2 Dec 2020 11:31:56 GMT", "version": "v1" } ]
2020-12-03
[ [ "Liu", "Liu", "" ], [ "Li", "Hongdong", "" ], [ "Yao", "Haodong", "" ], [ "Zha", "Ruyi", "" ] ]
Aligning two partially-overlapping 3D line reconstructions in Euclidean space is challenging, as we need to simultaneously solve for correspondences and the relative pose between the line reconstructions. This paper proposes a neural network based method with three modules connected in sequence: (i) a Multilayer Perceptron (MLP) based network takes Pluecker representations of lines as inputs to extract discriminative line-wise features and matchabilities (how likely each line is to have a match), (ii) an Optimal Transport (OT) layer takes the two-view line-wise features and matchabilities as inputs to estimate a 2D joint probability matrix, with each entry describing how well a line pair matches, and (iii) line pairs with the top-K matching probabilities are fed to a 2-line minimal solver in a RANSAC framework to estimate a six Degree-of-Freedom (6-DoF) rigid transformation. Experiments on both indoor and outdoor datasets show that the registration (rotation and translation) precision of our method outperforms baselines significantly.
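For reference, the Pluecker line representation consumed by the network above can be built from two points on a 3D line as a unit direction vector d and a moment vector m = p x d, which satisfy d . m = 0. The small helper below shows this construction on an invented line; it is not code from the paper.

```python
# Small helper showing the Pluecker representation of a 3D line built from two
# points on it: a unit direction vector d and a moment vector m = p x d, with
# the constraint d . m = 0. The toy line endpoints are illustrative.
import numpy as np

def pluecker_from_points(p1, p2):
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    d = d / np.linalg.norm(d)        # unit direction
    m = np.cross(p1, d)              # moment (independent of the point chosen on the line)
    return np.concatenate([d, m])    # 6-D Pluecker coordinates

line = pluecker_from_points([0.0, 1.0, 2.0], [1.0, 1.0, 2.0])
d, m = line[:3], line[3:]
print(line, "orthogonality check:", np.dot(d, m))
```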
2110.14432
Weiyang Liu
Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Sch\"olkopf, Adrian Weller
Iterative Teaching by Label Synthesis
NeurIPS 2021 Spotlight (v5: 28 pages, 20 figures, fixed typos in v4)
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the problem of iterative machine teaching, where a teacher provides examples sequentially based on the current iterative learner. In contrast to previous methods that have to scan over the entire pool and select teaching examples from it in each iteration, we propose a label synthesis teaching framework where the teacher randomly selects input teaching examples (e.g., images) and then synthesizes suitable outputs (e.g., labels) for them. We show that this framework can avoid costly example selection while still provably achieving exponential teachability. We propose multiple novel teaching algorithms in this framework. Finally, we empirically demonstrate the value of our framework.
[ { "created": "Wed, 27 Oct 2021 13:45:29 GMT", "version": "v1" }, { "created": "Wed, 10 Nov 2021 04:06:59 GMT", "version": "v2" }, { "created": "Mon, 25 Jul 2022 20:43:59 GMT", "version": "v3" }, { "created": "Sun, 11 Sep 2022 20:36:24 GMT", "version": "v4" }, { "created": "Thu, 26 Jan 2023 16:18:30 GMT", "version": "v5" } ]
2023-01-27
[ [ "Liu", "Weiyang", "" ], [ "Liu", "Zhen", "" ], [ "Wang", "Hanchen", "" ], [ "Paull", "Liam", "" ], [ "Schölkopf", "Bernhard", "" ], [ "Weller", "Adrian", "" ] ]
In this paper, we consider the problem of iterative machine teaching, where a teacher provides examples sequentially based on the current iterative learner. In contrast to previous methods that have to scan over the entire pool and select teaching examples from it in each iteration, we propose a label synthesis teaching framework where the teacher randomly selects input teaching examples (e.g., images) and then synthesizes suitable outputs (e.g., labels) for them. We show that this framework can avoid costly example selection while still provably achieving exponential teachability. We propose multiple novel teaching algorithms in this framework. Finally, we empirically demonstrate the value of our framework.
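To illustrate the label-synthesis idea on a toy case (not one of the paper's algorithms): for a least-squares learner updated by SGD, the teacher samples a random input and then synthesizes, in closed form, the label that moves the learner's parameters closest to the target. The target vector, dimension, learning rate, and the assumption of unconstrained synthetic labels are all made up for this example.

```python
# Toy label-synthesis teaching for a least-squares learner with SGD updates
# w <- w - lr * (w.x - y) * x. The teacher picks a random input x and chooses
# the label y minimizing the post-update distance ||w' - w*||^2 (closed form).
import numpy as np

rng = np.random.default_rng(0)
dim, lr, steps = 5, 0.1, 200
w_star = rng.normal(size=dim)          # parameters the teacher wants taught
w = np.zeros(dim)                      # learner's current parameters

for t in range(steps):
    x = rng.normal(size=dim)           # teacher samples a random input example
    d = w - w_star
    # y minimizing ||w - lr*(w.x - y)*x - w_star||^2 over y:
    y = w @ x - (d @ x) / (lr * (x @ x))
    w = w - lr * (w @ x - y) * x       # learner's ordinary SGD update
    if t % 50 == 0:
        print(t, np.linalg.norm(w - w_star))

print("final distance to target:", np.linalg.norm(w - w_star))
```

With this choice of label, each update removes the component of the error along the sampled input, so the distance to the target never increases; this mirrors the contrast the abstract draws with pool-based teaching, where the teacher would instead have to search the whole example pool at every step.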
1602.00812
Richard Moot
Richard Moot (LaBRI, CNRS)
The Grail theorem prover: Type theory for syntax and semantics
null
Modern Perspectives in Type Theoretical Semantics, Springer, 2016
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the name suggests, type-logical grammars are a grammar formalism based on logic and type theory. From the perspective of grammar design, type-logical grammars develop the syntactic and semantic aspects of linguistic phenomena hand-in-hand, letting the desired semantics of an expression inform the syntactic type and vice versa. Prototypical examples of the successful application of type-logical grammars to the syntax-semantics interface include coordination, quantifier scope and extraction. This chapter describes the Grail theorem prover, a series of tools for designing and testing grammars in various modern type-logical formalisms. All tools described in this chapter are freely available.
[ { "created": "Tue, 2 Feb 2016 07:35:02 GMT", "version": "v1" }, { "created": "Fri, 26 Aug 2016 07:04:29 GMT", "version": "v2" } ]
2016-08-29
[ [ "Moot", "Richard", "", "LaBRI, CNRS" ] ]
As the name suggests, type-logical grammars are a grammar formalism based on logic and type theory. From the perspective of grammar design, type-logical grammars develop the syntactic and semantic aspects of linguistic phenomena hand-in-hand, letting the desired semantics of an expression inform the syntactic type and vice versa. Prototypical examples of the successful application of type-logical grammars to the syntax-semantics interface include coordination, quantifier scope and extraction. This chapter describes the Grail theorem prover, a series of tools for designing and testing grammars in various modern type-logical formalisms. All tools described in this chapter are freely available.
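For readers unfamiliar with the type-logical idea behind tools like Grail, the toy checker below assigns each word a syntactic type and accepts a sentence when the type sequence reduces to "s" by function application. This is a bare-bones AB/Lambek-style illustration written for this note; it is not Grail's calculus or code, and the lexicon entries are invented.

```python
# Toy categorial type checker. Atoms are strings; ("/", a, b) is a/b (seeks b
# to its right), ("\\", b, a) is b\a (seeks b to its left).
def combine(left, right):
    if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
        return left[1]                      # forward application: a/b, b => a
    if isinstance(right, tuple) and right[0] == "\\" and right[1] == left:
        return right[2]                     # backward application: b, b\a => a
    return None

def derives(types, goal="s"):
    """Exhaustively try to reduce the sequence of types to the goal."""
    if len(types) == 1:
        return types[0] == goal
    for i in range(len(types) - 1):
        merged = combine(types[i], types[i + 1])
        if merged is not None and derives(types[:i] + [merged] + types[i + 2:], goal):
            return True
    return False

lexicon = {
    "alice": "np",
    "bob": "np",
    "likes": ("/", ("\\", "np", "s"), "np"),   # (np\s)/np: transitive verb
}

print(derives([lexicon[w] for w in ["alice", "likes", "bob"]]))   # True
print(derives([lexicon[w] for w in ["likes", "alice"]]))          # False
```

In a full type-logical grammar each such derivation also builds a semantic term via the Curry-Howard correspondence, which is the syntax-semantics coupling the abstract refers to; phenomena like quantifier scope require richer calculi than the application-only fragment shown here.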