Dataset schema — column name, value type, and min/max length:

id: stringlengths (9 to 10)
submitter: stringlengths (1 to 64)
authors: stringlengths (4 to 20.7k)
title: stringlengths (4 to 246)
comments: stringlengths (1 to 523)
journal-ref: stringlengths (4 to 404)
doi: stringlengths (11 to 153)
report-no: stringlengths (2 to 254)
categories: stringlengths (5 to 98)
license: stringclasses (9 values)
orig_abstract: stringlengths (14 to 3.35k)
versions: listlengths (1 to 60)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 1.35k)
abstract: stringlengths (11 to 3.34k)
id: 1610.00243
submitter: Elad Hoffer
authors: Elad Hoffer, Itay Hubara, Nir Ailon
title: Deep unsupervised learning through spatial contrasting
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Convolutional networks have marked their place over the last few years as the best performing model for various visual tasks. They are, however, most suited for supervised learning from large amounts of labeled data. Previous attempts have been made to use unlabeled data to improve model performance by applying unsupervised techniques. These attempts require different architectures and training methods. In this work we present a novel approach for unsupervised training of Convolutional networks that is based on contrasting between spatial regions within images. This criterion can be employed within conventional neural networks and trained using standard techniques such as SGD and back-propagation, thus complementing supervised methods.
versions: [ { "created": "Sun, 2 Oct 2016 08:42:59 GMT", "version": "v1" }, { "created": "Tue, 4 Dec 2018 15:38:31 GMT", "version": "v2" } ]
update_date: 2018-12-05
authors_parsed: [ [ "Hoffer", "Elad", "" ], [ "Hubara", "Itay", "" ], [ "Ailon", "Nir", "" ] ]
abstract:
Convolutional networks have marked their place over the last few years as the best performing model for various visual tasks. They are, however, most suited for supervised learning from large amounts of labeled data. Previous attempts have been made to use unlabeled data to improve model performance by applying unsupervised techniques. These attempts require different architectures and training methods. In this work we present a novel approach for unsupervised training of Convolutional networks that is based on contrasting between spatial regions within images. This criterion can be employed within conventional neural networks and trained using standard techniques such as SGD and back-propagation, thus complementing supervised methods.
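The contrasting criterion described in the abstract above can be illustrated with a toy sketch (illustrative only, not the authors' implementation): a feature vector from one region of an image should be scored as a more likely match for another region of the *same* image than for a region of a different image, which yields a standard contrastive log-loss.

```python
import math

def sqdist(u, v):
    # squared Euclidean distance between two feature vectors
    return sum((a - b) ** 2 for a, b in zip(u, v))

def spatial_contrast_loss(anchor, same_image, other_image):
    # Probability that `same_image` (a region from the same image as
    # `anchor`) is the match, under a softmax over negative distances;
    # the loss is the negative log of that probability.
    p_same = math.exp(-sqdist(anchor, same_image))
    p_other = math.exp(-sqdist(anchor, other_image))
    return -math.log(p_same / (p_same + p_other))
```

Because this is an ordinary differentiable loss over network outputs, it can be minimized with SGD and back-propagation exactly as the abstract states; the plain lists here are stand-ins for convolutional features.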
id: 1905.01734
submitter: Marcus Scheunemann
authors: Marcus M. Scheunemann and Christoph Salge and Kerstin Dautenhahn
title: Intrinsically Motivated Autonomy in Human-Robot Interaction: Human Perception of Predictive Information in Robots
comments: 12 pages, 1 figure, 1 table, Towards Autonomous Robotic Systems (TAROS), 2019
journal-ref: null
doi: 10.1007/978-3-030-23807-0_27
report-no: null
categories: cs.HC cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
In this paper we present a fully autonomous and intrinsically motivated robot usable for HRI experiments. We argue that an intrinsically motivated approach based on the Predictive Information formalism, like the one presented here, could provide us with a pathway towards autonomous robot behaviour generation that is capable of producing behaviour interesting enough to sustain interaction with humans, without the need for a human operator in the loop. We present a possible reactive baseline behaviour for comparison in future research. Participants perceive the baseline and the adaptive, intrinsically motivated behaviour differently. In our exploratory study we see evidence that participants perceive an intrinsically motivated robot as less intelligent than the reactive baseline behaviour. We argue that this is mostly due to the high adaptation rate chosen and the design of the environment. However, we also see that the adaptive robot is perceived as warmer, a factor which carries more weight in interpersonal interaction than competence.
versions: [ { "created": "Sun, 5 May 2019 19:01:24 GMT", "version": "v1" } ]
update_date: 2019-07-19
authors_parsed: [ [ "Scheunemann", "Marcus M.", "" ], [ "Salge", "Christoph", "" ], [ "Dautenhahn", "Kerstin", "" ] ]
abstract:
In this paper we present a fully autonomous and intrinsically motivated robot usable for HRI experiments. We argue that an intrinsically motivated approach based on the Predictive Information formalism, like the one presented here, could provide us with a pathway towards autonomous robot behaviour generation that is capable of producing behaviour interesting enough to sustain interaction with humans, without the need for a human operator in the loop. We present a possible reactive baseline behaviour for comparison in future research. Participants perceive the baseline and the adaptive, intrinsically motivated behaviour differently. In our exploratory study we see evidence that participants perceive an intrinsically motivated robot as less intelligent than the reactive baseline behaviour. We argue that this is mostly due to the high adaptation rate chosen and the design of the environment. However, we also see that the adaptive robot is perceived as warmer, a factor which carries more weight in interpersonal interaction than competence.
id: 2102.13351
submitter: Micha Sende
authors: Micha Sende, Melanie Schranz, Gianluca Prato, Etienne Brosse, Omar Morando, Martina Umlauft
title: Engineering Swarms of Cyber-Physical Systems with the CPSwarm Workbench
comments: null
journal-ref: null
doi: 10.1007/s10846-021-01430-1
report-no: null
categories: cs.MA
license: http://creativecommons.org/licenses/by-sa/4.0/
orig_abstract:
Engineering swarms of cyber-physical systems (CPSs) is a complex process. We present the CPSwarm workbench, which creates an automated design workflow to ease this process. This formalized workflow guides the user from modeling, to code generation, to deployment, both in simulation and on CPS hardware platforms. The workbench combines existing and emerging tools to solve real-world CPS swarm problems. As a proof of concept, we use the workbench to design a swarm of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) for a search and rescue (SAR) use case. We evaluate the resulting swarm behaviors on three levels: first, abstract simulations for rapid prototyping; second, detailed simulations to test the correctness of the results; third, deployment on hardware to demonstrate applicability. We measure the swarm performance in terms of area covered and victims rescued. The results show that the performance of the swarm is proportional to its size. Despite some manual steps, the proposed workbench proves to be well suited to easing the complicated task of deploying a swarm of CPSs.
versions: [ { "created": "Fri, 26 Feb 2021 08:01:08 GMT", "version": "v1" } ]
update_date: 2021-09-10
authors_parsed: [ [ "Sende", "Micha", "" ], [ "Schranz", "Melanie", "" ], [ "Prato", "Gianluca", "" ], [ "Brosse", "Etienne", "" ], [ "Morando", "Omar", "" ], [ "Umlauft", "Martina", "" ] ]
abstract:
Engineering swarms of cyber-physical systems (CPSs) is a complex process. We present the CPSwarm workbench, which creates an automated design workflow to ease this process. This formalized workflow guides the user from modeling, to code generation, to deployment, both in simulation and on CPS hardware platforms. The workbench combines existing and emerging tools to solve real-world CPS swarm problems. As a proof of concept, we use the workbench to design a swarm of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) for a search and rescue (SAR) use case. We evaluate the resulting swarm behaviors on three levels: first, abstract simulations for rapid prototyping; second, detailed simulations to test the correctness of the results; third, deployment on hardware to demonstrate applicability. We measure the swarm performance in terms of area covered and victims rescued. The results show that the performance of the swarm is proportional to its size. Despite some manual steps, the proposed workbench proves to be well suited to easing the complicated task of deploying a swarm of CPSs.
id: 2303.07347
submitter: Dingfeng Shi
authors: Dingfeng Shi, Yujie Zhong, Qiong Cao, Lin Ma, Jia Li, Dacheng Tao
title: TriDet: Temporal Action Detection with Relative Boundary Modeling
comments: CVPR2023; Temporal Action Detection; Temporal Action Localization
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.MM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
In this paper, we present TriDet, a one-stage framework for temporal action detection. Existing methods often suffer from imprecise boundary predictions due to the ambiguous action boundaries in videos. To alleviate this problem, we propose a novel Trident-head to model the action boundary via an estimated relative probability distribution around the boundary. In the feature pyramid of TriDet, we propose an efficient Scalable-Granularity Perception (SGP) layer to mitigate the rank-loss problem of self-attention that takes place in the video features and to aggregate information across different temporal granularities. Benefiting from the Trident-head and the SGP-based feature pyramid, TriDet achieves state-of-the-art performance on three challenging benchmarks: THUMOS14, HACS and EPIC-KITCHEN 100, with lower computational costs compared to previous methods. For example, TriDet reaches an average mAP of $69.3\%$ on THUMOS14, outperforming the previous best by $2.5\%$ with only $74.6\%$ of its latency. The code is released at https://github.com/sssste/TriDet.
versions: [ { "created": "Mon, 13 Mar 2023 17:59:59 GMT", "version": "v1" }, { "created": "Thu, 16 Mar 2023 11:26:39 GMT", "version": "v2" } ]
update_date: 2023-03-17
authors_parsed: [ [ "Shi", "Dingfeng", "" ], [ "Zhong", "Yujie", "" ], [ "Cao", "Qiong", "" ], [ "Ma", "Lin", "" ], [ "Li", "Jia", "" ], [ "Tao", "Dacheng", "" ] ]
abstract:
In this paper, we present TriDet, a one-stage framework for temporal action detection. Existing methods often suffer from imprecise boundary predictions due to the ambiguous action boundaries in videos. To alleviate this problem, we propose a novel Trident-head to model the action boundary via an estimated relative probability distribution around the boundary. In the feature pyramid of TriDet, we propose an efficient Scalable-Granularity Perception (SGP) layer to mitigate the rank-loss problem of self-attention that takes place in the video features and to aggregate information across different temporal granularities. Benefiting from the Trident-head and the SGP-based feature pyramid, TriDet achieves state-of-the-art performance on three challenging benchmarks: THUMOS14, HACS and EPIC-KITCHEN 100, with lower computational costs compared to previous methods. For example, TriDet reaches an average mAP of $69.3\%$ on THUMOS14, outperforming the previous best by $2.5\%$ with only $74.6\%$ of its latency. The code is released at https://github.com/sssste/TriDet.
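The relative boundary modeling described above — predicting a probability distribution over a small set of relative offsets ("bins") next to an instant, and taking the boundary as the expected offset — can be sketched as follows. The bin count and function names are illustrative assumptions, not the paper's exact head.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def expected_offset(bin_logits):
    # Expectation of the relative offset under the predicted distribution.
    return sum(b * p for b, p in enumerate(softmax(bin_logits)))

def start_boundary(t, bin_logits):
    # The start boundary lies `expected_offset` bins to the left of instant t.
    return t - expected_offset(bin_logits)
```

Averaging over a distribution gives sub-bin precision, which is the appeal of relative boundary modeling compared with regressing a single hard offset.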
id: 1512.01568
submitter: Sanjay Sahay
authors: Aruna Govada, Pravin Joshi, Sahil Mittal and Sanjay K Sahay
title: Hybrid Approach for Inductive Semi Supervised Learning using Label Propagation and Support Vector Machine
comments: Presented in the 11th International Conference, MLDM, Germany, July 20 - 21, 2015. Springer, Machine Learning and Data Mining in Pattern Recognition, LNAI Vol. 9166, p. 199-213, 2015
journal-ref: null
doi: 10.1007/978-3-319-21024-7_14
report-no: null
categories: cs.LG cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Semi-supervised learning methods have gained importance in today's world because of the large expense and time involved in labeling unlabeled data by human experts. The proposed hybrid approach uses SVM and Label Propagation to label the unlabeled data. In the process, at each step an SVM is trained to minimize the error and thus improve the prediction quality. Experiments are conducted using SVM and logistic regression (Logreg). Results show that SVM performs considerably better than Logreg. The approach is tested using 12 datasets of different sizes, ranging from the order of 1,000s to the order of 10,000s. Results show that the proposed approach outperforms Label Propagation by a large margin, with an F-measure of almost twice on average. A parallel version of the proposed approach is also designed and implemented; the analysis shows that the training time decreases significantly when the parallel version is used.
versions: [ { "created": "Wed, 2 Dec 2015 12:04:30 GMT", "version": "v1" } ]
update_date: 2015-12-08
authors_parsed: [ [ "Govada", "Aruna", "" ], [ "Joshi", "Pravin", "" ], [ "Mittal", "Sahil", "" ], [ "Sahay", "Sanjay K", "" ] ]
abstract:
Semi-supervised learning methods have gained importance in today's world because of the large expense and time involved in labeling unlabeled data by human experts. The proposed hybrid approach uses SVM and Label Propagation to label the unlabeled data. In the process, at each step an SVM is trained to minimize the error and thus improve the prediction quality. Experiments are conducted using SVM and logistic regression (Logreg). Results show that SVM performs considerably better than Logreg. The approach is tested using 12 datasets of different sizes, ranging from the order of 1,000s to the order of 10,000s. Results show that the proposed approach outperforms Label Propagation by a large margin, with an F-measure of almost twice on average. A parallel version of the proposed approach is also designed and implemented; the analysis shows that the training time decreases significantly when the parallel version is used.
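The hybrid labeling loop described above — fit a classifier on the labeled pool, pseudo-label the most confident unlabeled point, and refit — can be sketched with a 1-D nearest-centroid classifier standing in for the SVM and a distance margin standing in for label propagation's confidence. Both stand-ins are assumptions for illustration, not the paper's actual models.

```python
def fit(pairs):
    # nearest-centroid "training": one centroid (class mean) per label
    sums, counts = {}, {}
    for x, y in pairs:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def margin(centroids, x):
    # confidence: gap between the two closest centroids
    d = sorted(abs(x - c) for c in centroids.values())
    return d[1] - d[0]

def self_train(labeled, unlabeled):
    labeled, pool = list(labeled), list(unlabeled)
    while pool:
        centroids = fit(labeled)
        best = max(pool, key=lambda x: margin(centroids, x))
        labeled.append((best, predict(centroids, best)))  # pseudo-label it
        pool.remove(best)
    return fit(labeled)
```

Labeling the most confident points first is what keeps early pseudo-label errors from propagating into the later refits.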
id: 1012.1547
submitter: Martin Hoefer
authors: Martin Hoefer, Michal Penn, Maria Polukarov, Alexander Skopalik, Berhold Vöcking
title: Considerate Equilibrium
comments: 12 pages, 1 figure
journal-ref: null
doi: null
report-no: null
categories: cs.GT cs.DS cs.MA
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
We consider the existence and computational complexity of coalitional stability concepts based on social networks. Our concepts represent a natural and rich combinatorial generalization of a recent approach termed partition equilibrium. We assume that players in a strategic game are embedded in a social network, and there are coordination constraints that restrict the potential coalitions that can jointly deviate in the game to the set of cliques in the social network. In addition, players act in a "considerate" fashion to ignore potentially profitable (group) deviations if the change in their strategy may cause a decrease of utility to their neighbors. We study the properties of such considerate equilibria in application to the class of resource selection games (RSG). Our main result proves existence of a considerate equilibrium in all symmetric RSG with strictly increasing delays, for any social network among the players. The existence proof is constructive and yields an efficient algorithm. In fact, the computed considerate equilibrium is a Nash equilibrium for the standard RSG showing that there exists a state that is stable against selfish and considerate behavior simultaneously. In addition, we show results on convergence of considerate dynamics.
versions: [ { "created": "Tue, 7 Dec 2010 16:44:20 GMT", "version": "v1" } ]
update_date: 2010-12-08
authors_parsed: [ [ "Hoefer", "Martin", "" ], [ "Penn", "Michal", "" ], [ "Polukarov", "Maria", "" ], [ "Skopalik", "Alexander", "" ], [ "Vöcking", "Berhold", "" ] ]
abstract:
We consider the existence and computational complexity of coalitional stability concepts based on social networks. Our concepts represent a natural and rich combinatorial generalization of a recent approach termed partition equilibrium. We assume that players in a strategic game are embedded in a social network, and there are coordination constraints that restrict the potential coalitions that can jointly deviate in the game to the set of cliques in the social network. In addition, players act in a "considerate" fashion to ignore potentially profitable (group) deviations if the change in their strategy may cause a decrease of utility to their neighbors. We study the properties of such considerate equilibria in application to the class of resource selection games (RSG). Our main result proves existence of a considerate equilibrium in all symmetric RSG with strictly increasing delays, for any social network among the players. The existence proof is constructive and yields an efficient algorithm. In fact, the computed considerate equilibrium is a Nash equilibrium for the standard RSG showing that there exists a state that is stable against selfish and considerate behavior simultaneously. In addition, we show results on convergence of considerate dynamics.
id: 1303.2553
submitter: Yi-ying Tseng
authors: Jen-Yeu Chen, Yi-ying Tseng
title: Distributed Intrusion Detection of Byzantine Attacks in Wireless Networks with Random Linear Network Coding
comments: null
journal-ref: International Journal of Distributed Sensor Networks, Volume 2012 (2012), Article ID 758340, 10 pages
doi: null
report-no: null
categories: cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Network coding is an elegant technique where, instead of simply relaying the packets of information they receive, the nodes of a network are allowed to combine \emph{several} packets together for transmission; this technique can be used to achieve the maximum possible information flow in a network and to reduce the number of packet transmissions needed. Moreover, in an energy-constrained wireless network such as a Wireless Sensor Network (a typical type of wireless ad hoc network), applying network coding to reduce the number of wireless transmissions can also prolong the lifetime of sensor nodes. Although applying network coding in a wireless sensor network is obviously beneficial, because each transmitted packet is actually a combination of multiple other packets, error propagation may occur in the network. This special characteristic also exposes network coding systems to a wide range of error attacks, especially Byzantine attacks. When adversary nodes generate erroneous data in a network with network coding, that erroneous information will be mixed at intermediate nodes and thus corrupt all the information reaching a destination. Recent research efforts have shown that network coding can be combined with classical error control codes and cryptography for secure communication or misbehavior detection. Nevertheless, when it comes to Byzantine attacks, these results have limited effect. In fact, unless we identify those adversary nodes and isolate them, network coding may perform much worse than pure routing in the presence of malicious nodes. In this paper, a distributed hierarchical algorithm based on random linear network coding is developed to detect, locate and isolate malicious nodes.
versions: [ { "created": "Mon, 11 Mar 2013 15:50:51 GMT", "version": "v1" } ]
update_date: 2013-03-12
authors_parsed: [ [ "Chen", "Jen-Yeu", "" ], [ "Tseng", "Yi-ying", "" ] ]
abstract:
Network coding is an elegant technique where, instead of simply relaying the packets of information they receive, the nodes of a network are allowed to combine \emph{several} packets together for transmission; this technique can be used to achieve the maximum possible information flow in a network and to reduce the number of packet transmissions needed. Moreover, in an energy-constrained wireless network such as a Wireless Sensor Network (a typical type of wireless ad hoc network), applying network coding to reduce the number of wireless transmissions can also prolong the lifetime of sensor nodes. Although applying network coding in a wireless sensor network is obviously beneficial, because each transmitted packet is actually a combination of multiple other packets, error propagation may occur in the network. This special characteristic also exposes network coding systems to a wide range of error attacks, especially Byzantine attacks. When adversary nodes generate erroneous data in a network with network coding, that erroneous information will be mixed at intermediate nodes and thus corrupt all the information reaching a destination. Recent research efforts have shown that network coding can be combined with classical error control codes and cryptography for secure communication or misbehavior detection. Nevertheless, when it comes to Byzantine attacks, these results have limited effect. In fact, unless we identify those adversary nodes and isolate them, network coding may perform much worse than pure routing in the presence of malicious nodes. In this paper, a distributed hierarchical algorithm based on random linear network coding is developed to detect, locate and isolate malicious nodes.
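Random linear network coding, as described above, can be sketched over GF(2): each coded packet is an XOR of the source packets selected by a coefficient vector, and a receiver decodes by Gaussian elimination once it holds enough independent combinations. This minimal sketch (payloads as ints, 0/1 coefficients) also shows why a single Byzantine packet is dangerous: during elimination its error spreads into every packet it was mixed with.

```python
def encode(packets, coeffs):
    # XOR together the source packets selected by the 0/1 coefficient vector
    out = 0
    for c, p in zip(coeffs, packets):
        if c:
            out ^= p
    return out

def decode(coded):
    # coded: list of (coefficient tuple, payload);
    # Gaussian elimination over GF(2), where addition is XOR
    rows = [(list(c), p) for c, p in coded]
    n = len(rows[0][0])
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # not enough independent combinations yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(n)]
```

For example, with source packets [10, 6, 15] and coefficient vectors (1,0,0), (1,1,0), (1,1,1), the coded payloads are 10, 12 and 3, and decoding recovers the originals.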
id: 1806.08946
submitter: Ze Wang
authors: Jingyuan Wang, Ze Wang, Jianfeng Li, Junjie Wu
title: Multilevel Wavelet Decomposition Network for Interpretable Time Series Analysis
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG eess.SP stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Recent years have witnessed the unprecedented rise of time series data from almost all kinds of academic and industrial fields. Various types of deep neural network models have been introduced for time series analysis, but the important frequency information still lacks effective modeling. In light of this, in this paper we propose a wavelet-based neural network structure called multilevel Wavelet Decomposition Network (mWDN) for building frequency-aware deep learning models for time series analysis. mWDN preserves the advantage of multilevel discrete wavelet decomposition in frequency learning while enabling the fine-tuning of all parameters under a deep neural network framework. Based on mWDN, we further propose two deep learning models called Residual Classification Flow (RCF) and multi-frequency Long Short-Term Memory (mLSTM) for time series classification and forecasting, respectively. The two models take all or part of the mWDN-decomposed sub-series in different frequencies as input, and resort to the back-propagation algorithm to learn all the parameters globally, which enables seamless embedding of wavelet-based frequency analysis into deep learning frameworks. Extensive experiments on 40 UCR datasets and a real-world user volume dataset demonstrate the excellent performance of our mWDN-based time series models. In particular, we propose an importance analysis method for mWDN-based models, which successfully identifies those time-series elements and mWDN layers that are crucially important to time series analysis. This indicates the interpretability advantage of mWDN, and can be viewed as an in-depth exploration of interpretable deep learning.
versions: [ { "created": "Sat, 23 Jun 2018 11:12:12 GMT", "version": "v1" } ]
update_date: 2018-06-26
authors_parsed: [ [ "Wang", "Jingyuan", "" ], [ "Wang", "Ze", "" ], [ "Li", "Jianfeng", "" ], [ "Wu", "Junjie", "" ] ]
abstract:
Recent years have witnessed the unprecedented rise of time series data from almost all kinds of academic and industrial fields. Various types of deep neural network models have been introduced for time series analysis, but the important frequency information still lacks effective modeling. In light of this, in this paper we propose a wavelet-based neural network structure called multilevel Wavelet Decomposition Network (mWDN) for building frequency-aware deep learning models for time series analysis. mWDN preserves the advantage of multilevel discrete wavelet decomposition in frequency learning while enabling the fine-tuning of all parameters under a deep neural network framework. Based on mWDN, we further propose two deep learning models called Residual Classification Flow (RCF) and multi-frequency Long Short-Term Memory (mLSTM) for time series classification and forecasting, respectively. The two models take all or part of the mWDN-decomposed sub-series in different frequencies as input, and resort to the back-propagation algorithm to learn all the parameters globally, which enables seamless embedding of wavelet-based frequency analysis into deep learning frameworks. Extensive experiments on 40 UCR datasets and a real-world user volume dataset demonstrate the excellent performance of our mWDN-based time series models. In particular, we propose an importance analysis method for mWDN-based models, which successfully identifies those time-series elements and mWDN layers that are crucially important to time series analysis. This indicates the interpretability advantage of mWDN, and can be viewed as an in-depth exploration of interpretable deep learning.
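The multilevel decomposition that mWDN makes learnable can be sketched with fixed Haar filters: each level splits the series into a low-frequency approximation (pairwise averages) and a high-frequency detail (pairwise differences), and the low branch is split again. In mWDN these filters are trainable network layers fine-tuned by back-propagation; this fixed-filter version is only an illustration of the sub-series structure.

```python
def haar_split(x):
    # one decomposition level: pairwise averages (low) and differences (high)
    low = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    high = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return low, high

def multilevel_decompose(x, levels):
    # returns the high-frequency sub-series of each level plus the final
    # low-frequency approximation, coarsest last
    subseries, current = [], list(x)
    for _ in range(levels):
        current, high = haar_split(current)
        subseries.append(high)
    subseries.append(current)
    return subseries
```

These per-frequency sub-series are exactly the kind of inputs the abstract says RCF and mLSTM consume, in whole or in part.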
id: 2012.07030
submitter: Pan Cunhua
authors: Kangda Zhi, Cunhua Pan, Hong Ren and Kezhi Wang
title: Statistical CSI-based Design for Reconfigurable Intelligent Surface-aided Massive MIMO Systems with Direct Links
comments: Accepted by IEEE Wireless Communications Letters. Keywords: Intelligent Reflecting Surface (IRS), reconfigurable intelligent surface (RIS)
journal-ref: null
doi: null
report-no: null
categories: cs.IT eess.SP math.IT
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract:
This paper investigates the performance of reconfigurable intelligent surface (RIS)-aided massive multiple-input multiple-output (MIMO) systems with direct links, and the phase shifts of the RIS are designed based on the statistical channel state information (CSI). We first derive the closed-form expression of the uplink ergodic data rate. Then, based on the derived expression, we use the genetic algorithm (GA) to solve the sum data rate maximization problem. With low-complexity maximal-ratio combination (MRC) and low-overhead statistical CSI-based scheme, we validate that the RIS can still bring significant performance gains to traditional massive MIMO systems.
versions: [ { "created": "Sun, 13 Dec 2020 10:50:06 GMT", "version": "v1" }, { "created": "Tue, 15 Dec 2020 02:13:05 GMT", "version": "v2" }, { "created": "Mon, 15 Feb 2021 12:49:32 GMT", "version": "v3" } ]
update_date: 2021-02-16
authors_parsed: [ [ "Zhi", "Kangda", "" ], [ "Pan", "Cunhua", "" ], [ "Ren", "Hong", "" ], [ "Wang", "Kezhi", "" ] ]
abstract:
This paper investigates the performance of reconfigurable intelligent surface (RIS)-aided massive multiple-input multiple-output (MIMO) systems with direct links, and the phase shifts of the RIS are designed based on the statistical channel state information (CSI). We first derive the closed-form expression of the uplink ergodic data rate. Then, based on the derived expression, we use the genetic algorithm (GA) to solve the sum data rate maximization problem. With low-complexity maximal-ratio combination (MRC) and low-overhead statistical CSI-based scheme, we validate that the RIS can still bring significant performance gains to traditional massive MIMO systems.
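The genetic-algorithm step described above can be illustrated on a toy stand-in objective (not the paper's derived ergodic-rate expression): choose phase shifts that coherently combine a fixed set of unit-modulus channel coefficients, for which the known optimum equals the sum of the channel magnitudes. All names and GA hyper-parameters here are illustrative assumptions.

```python
import cmath
import math
import random

def fitness(phases, h):
    # magnitude of the coherently combined channel; maximized when each
    # phase shift cancels the corresponding channel phase
    return abs(sum(cmath.exp(1j * t) * g for t, g in zip(phases, h)))

def ga_phases(h, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    n = len(h)
    pop = [[rng.uniform(0, 2 * math.pi) for _ in range(n)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, h), reverse=True)
        elite = pop[: pop_size // 2]           # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.3:             # mutation: resample one phase
                i = rng.randrange(n)
                child[i] = rng.uniform(0, 2 * math.pi)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, h))
```

A GA fits the statistical-CSI setting well because the objective only needs to be evaluated, not differentiated, and the phases are optimized once per channel statistics rather than per fading realization.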
id: 2208.14637
submitter: Lucas Morillo-Mendez
authors: Tim Schreiter, Lucas Morillo-Mendez, Ravi T. Chadalavada, Andrey Rudenko, Erik Alexander Billing, and Achim J. Lilienthal
title: The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction
comments: in SCRITA Workshop Proceedings (arXiv:2208.11090) held in conjunction with 31st IEEE International Conference on Robot & Human Interactive Communication, 29/08 - 02/09 2022, Naples (Italy)
journal-ref: null
doi: null
report-no: SCRITA/2022/3783
categories: cs.RO cs.HC
license: http://creativecommons.org/licenses/by-sa/4.0/
orig_abstract:
Robots are increasingly deployed in spaces shared with humans, including home settings and industrial environments. In these environments, the interaction between humans and robots (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in a robot, but robots in industrial environments are not usually anthropomorphic. We designed a simple interaction in an industrial environment in which an anthropomorphic mock driver (ARMoD) robot simulates driving an autonomous guided vehicle (AGV). The task consisted of a human crossing paths with the AGV, with or without the ARMoD mounted on top, in a narrow corridor. The human and the system needed to negotiate trajectories when crossing paths, meaning that the human had to attend to the trajectory of the robot to avoid a collision with it. There was a significant increase in the reported trust scores in the condition where the ARMoD was present, showing that the presence of an anthropomorphic robot is enough to modulate trust, even in limited interactions such as the one we present here.
versions: [ { "created": "Wed, 31 Aug 2022 05:19:40 GMT", "version": "v1" }, { "created": "Thu, 1 Sep 2022 14:35:07 GMT", "version": "v2" } ]
update_date: 2022-09-02
authors_parsed: [ [ "Schreiter", "Tim", "" ], [ "Morillo-Mendez", "Lucas", "" ], [ "Chadalavada", "Ravi T.", "" ], [ "Rudenko", "Andrey", "" ], [ "Billing", "Erik Alexander", "" ], [ "Lilienthal", "Achim J.", "" ] ]
abstract:
Robots are increasingly deployed in spaces shared with humans, including home settings and industrial environments. In these environments, the interaction between humans and robots (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in a robot, but robots in industrial environments are not usually anthropomorphic. We designed a simple interaction in an industrial environment in which an anthropomorphic mock driver (ARMoD) robot simulates driving an autonomous guided vehicle (AGV). The task consisted of a human crossing paths with the AGV, with or without the ARMoD mounted on top, in a narrow corridor. The human and the system needed to negotiate trajectories when crossing paths, meaning that the human had to attend to the trajectory of the robot to avoid a collision with it. There was a significant increase in the reported trust scores in the condition where the ARMoD was present, showing that the presence of an anthropomorphic robot is enough to modulate trust, even in limited interactions such as the one we present here.
id: 1006.4035
submitter: Uwe Aickelin
authors: Peer-Olaf Siebers, Uwe Aickelin, Helen Celia, Chris Clegg
title: Towards the Development of a Simulator for Investigating the Impact of People Management Practices on Retail Performance
comments: 24 pages, 7 figures, 6 tables, Journal of Simulation 2010
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.CE cs.MA
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Often models for understanding the impact of management practices on retail performance are developed under the assumption of stability, equilibrium and linearity, whereas retail operations are considered in reality to be dynamic, non-linear and complex. Alternatively, discrete event and agent-based modelling are approaches that allow the development of simulation models of heterogeneous non-equilibrium systems for testing out different scenarios. When developing simulation models one has to abstract and simplify from the real world, which means that one has to try and capture the 'essence' of the system required for developing a representation of the mechanisms that drive the progression in the real system. Simulation models can be developed at different levels of abstraction. To know the appropriate level of abstraction for a specific application is often more of an art than a science. We have developed a retail branch simulation model to investigate which level of model accuracy is required for such a model to obtain meaningful results for practitioners.
versions: [ { "created": "Mon, 21 Jun 2010 11:23:23 GMT", "version": "v1" } ]
update_date: 2010-07-05
authors_parsed: [ [ "Siebers", "Peer-Olaf", "" ], [ "Aickelin", "Uwe", "" ], [ "Celia", "Helen", "" ], [ "Clegg", "Chris", "" ] ]
abstract:
Often models for understanding the impact of management practices on retail performance are developed under the assumption of stability, equilibrium and linearity, whereas retail operations are considered in reality to be dynamic, non-linear and complex. Alternatively, discrete event and agent-based modelling are approaches that allow the development of simulation models of heterogeneous non-equilibrium systems for testing out different scenarios. When developing simulation models one has to abstract and simplify from the real world, which means that one has to try and capture the 'essence' of the system required for developing a representation of the mechanisms that drive the progression in the real system. Simulation models can be developed at different levels of abstraction. To know the appropriate level of abstraction for a specific application is often more of an art than a science. We have developed a retail branch simulation model to investigate which level of model accuracy is required for such a model to obtain meaningful results for practitioners.
id: 2105.07596
submitter: Zhepei Wang
authors: Zhepei Wang, Jonah Casebeer, Adam Clemmitt, Efthymios Tzinis, Paris Smaragdis
title: Sound Event Detection with Adaptive Frequency Selection
comments: Accepted by IEEE Workshop on Applications of Signal Processing to Audio and Acoustics 2021
journal-ref: null
doi: null
report-no: null
categories: cs.SD eess.AS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
In this work, we present HIDACT, a novel network architecture for adaptive computation for efficiently recognizing acoustic events. We evaluate the model on a sound event detection task where we train it to adaptively process frequency bands. The model learns to adapt to the input without requesting all frequency sub-bands provided. It can make confident predictions within fewer processing steps, hence reducing the amount of computation. Experimental results show that HIDACT has comparable performance to baseline models with more parameters and higher computational complexity. Furthermore, the model can adjust the amount of computation based on the data and computational budget.
[ { "created": "Mon, 17 May 2021 03:57:33 GMT", "version": "v1" }, { "created": "Thu, 29 Jul 2021 05:02:59 GMT", "version": "v2" } ]
2021-07-30
[ [ "Wang", "Zhepei", "" ], [ "Casebeer", "Jonah", "" ], [ "Clemmitt", "Adam", "" ], [ "Tzinis", "Efthymios", "" ], [ "Smaragdis", "Paris", "" ] ]
In this work, we present HIDACT, a novel network architecture for adaptive computation for efficiently recognizing acoustic events. We evaluate the model on a sound event detection task where we train it to adaptively process frequency bands. The model learns to adapt to the input without requesting all frequency sub-bands provided. It can make confident predictions within fewer processing steps, hence reducing the amount of computation. Experimental results show that HIDACT has comparable performance to baseline models with more parameters and higher computational complexity. Furthermore, the model can adjust the amount of computation based on the data and computational budget.
2010.13631
Yiwen Liao
Yiwen Liao, Rapha\"el Latty, Bin Yang
Feature Selection Using Batch-Wise Attenuation and Feature Mask Normalization
accepted by IJCNN2021
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature selection is generally used as one of the most important preprocessing techniques in machine learning, as it helps to reduce the dimensionality of data and assists researchers and practitioners in understanding data. Thereby, by utilizing feature selection, one can expect better performance as well as reduced computational cost, memory complexity, and even the amount of data required. Although there exist approaches leveraging the power of deep neural networks to carry out feature selection, many of them often suffer from sensitive hyperparameters. This paper proposes a feature mask module (FM-module) for feature selection based on a novel batch-wise attenuation and feature mask normalization. The proposed method is almost free from hyperparameters and can be easily integrated into common neural networks as an embedded feature selection method. Experiments on popular image, text and speech datasets have shown that our approach is easy to use and has superior performance in comparison with other state-of-the-art deep-learning-based feature selection methods.
[ { "created": "Mon, 26 Oct 2020 14:46:38 GMT", "version": "v1" }, { "created": "Mon, 8 Mar 2021 13:02:04 GMT", "version": "v2" }, { "created": "Fri, 23 Apr 2021 14:28:38 GMT", "version": "v3" } ]
2021-04-26
[ [ "Liao", "Yiwen", "" ], [ "Latty", "Raphaël", "" ], [ "Yang", "Bin", "" ] ]
Feature selection is generally used as one of the most important preprocessing techniques in machine learning, as it helps to reduce the dimensionality of data and assists researchers and practitioners in understanding data. Thereby, by utilizing feature selection, one can expect better performance as well as reduced computational cost, memory complexity, and even the amount of data required. Although there exist approaches leveraging the power of deep neural networks to carry out feature selection, many of them often suffer from sensitive hyperparameters. This paper proposes a feature mask module (FM-module) for feature selection based on a novel batch-wise attenuation and feature mask normalization. The proposed method is almost free from hyperparameters and can be easily integrated into common neural networks as an embedded feature selection method. Experiments on popular image, text and speech datasets have shown that our approach is easy to use and has superior performance in comparison with other state-of-the-art deep-learning-based feature selection methods.
2207.09095
Meng Hua
Meng Hua, Qingqing Wu, Wen Chen, Octavia A. Dobre, A. Lee Swindlehurst
Secure Intelligent Reflecting Surface Aided Integrated Sensing and Communication
This paper has been submitted to IEEE journal for possible publication
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
In this paper, an intelligent reflecting surface (IRS) is leveraged to enhance the physical layer security of an integrated sensing and communication (ISAC) system in which the IRS is deployed to not only assist the downlink communication for multiple users, but also create a virtual line-of-sight (LoS) link for target sensing. In particular, we consider a challenging scenario where the target may be a suspicious eavesdropper that potentially intercepts the communication-user information transmitted by the base station (BS). We investigate the joint design of the phase shifts at the IRS and the communication as well as radar beamformers at the BS to maximize the sensing beampattern gain towards the target, subject to the maximum information leakage to the eavesdropping target and the minimum signal-to-interference-plus-noise ratio (SINR) required by users. Based on the availability of perfect channel state information (CSI) of all involved user links and the accurate target location at the BS, two scenarios are considered and two different optimization algorithms are proposed. For the ideal scenario where the CSI of the user links and the target location are perfectly known at the BS, a penalty-based algorithm is proposed to obtain a high-quality solution. In particular, the beamformers are obtained with a semi-closed-form solution using Lagrange duality and the IRS phase shifts are solved for in closed form by applying the majorization-minimization (MM) method. On the other hand, for the more practical scenario where the CSI is imperfect and the target location is uncertain, a robust algorithm based on the $\cal S$-procedure and sign-definiteness approaches is proposed. Simulation results demonstrate the effectiveness of the proposed scheme in achieving a trade-off between the communication quality and the sensing quality.
[ { "created": "Tue, 19 Jul 2022 06:15:22 GMT", "version": "v1" } ]
2022-07-20
[ [ "Hua", "Meng", "" ], [ "Wu", "Qingqing", "" ], [ "Chen", "Wen", "" ], [ "Dobre", "Octavia A.", "" ], [ "Swindlehurst", "A. Lee", "" ] ]
In this paper, an intelligent reflecting surface (IRS) is leveraged to enhance the physical layer security of an integrated sensing and communication (ISAC) system in which the IRS is deployed to not only assist the downlink communication for multiple users, but also create a virtual line-of-sight (LoS) link for target sensing. In particular, we consider a challenging scenario where the target may be a suspicious eavesdropper that potentially intercepts the communication-user information transmitted by the base station (BS). We investigate the joint design of the phase shifts at the IRS and the communication as well as radar beamformers at the BS to maximize the sensing beampattern gain towards the target, subject to the maximum information leakage to the eavesdropping target and the minimum signal-to-interference-plus-noise ratio (SINR) required by users. Based on the availability of perfect channel state information (CSI) of all involved user links and the accurate target location at the BS, two scenarios are considered and two different optimization algorithms are proposed. For the ideal scenario where the CSI of the user links and the target location are perfectly known at the BS, a penalty-based algorithm is proposed to obtain a high-quality solution. In particular, the beamformers are obtained with a semi-closed-form solution using Lagrange duality and the IRS phase shifts are solved for in closed form by applying the majorization-minimization (MM) method. On the other hand, for the more practical scenario where the CSI is imperfect and the target location is uncertain, a robust algorithm based on the $\cal S$-procedure and sign-definiteness approaches is proposed. Simulation results demonstrate the effectiveness of the proposed scheme in achieving a trade-off between the communication quality and the sensing quality.
2203.01680
Eduardo Esmanhotto
E. Esmanhotto, T. Hirtzlin, N. Castellani, S. Martin, B. Giraud, F. Andrieu, J.F. Nodin, D. Querlioz, J-M. Portal and E. Vianello
Experimental demonstration of Single-Level and Multi-Level-Cell RRAM-based In-Memory Computing with up to 16 parallel operations
Preprint for IRPS2022
null
null
null
cs.ET
http://creativecommons.org/licenses/by-nc-nd/4.0/
Crossbar arrays of resistive memories (RRAM) hold the promise of enabling In-Memory Computing (IMC), but essential challenges due to the impact of device imperfection and device endurance have yet to be overcome. In this work, we demonstrate experimentally an RRAM-based IMC logic concept with strong resilience to RRAM variability, even after one million endurance cycles. Our work relies on a generalization of the concept of in-memory Scouting Logic, and we demonstrate it experimentally with up to 16 parallel devices (operands), a new milestone for RRAM in-memory logic. Moreover, we combine IMC with Multi-Level-Cell programming and demonstrate experimentally, for the first time, an IMC RRAM-based MLC 2-bit adder.
[ { "created": "Thu, 3 Mar 2022 12:38:12 GMT", "version": "v1" } ]
2022-03-04
[ [ "Esmanhotto", "E.", "" ], [ "Hirtzlin", "T.", "" ], [ "Castellani", "N.", "" ], [ "Martin", "S.", "" ], [ "Giraud", "B.", "" ], [ "Andrieu", "F.", "" ], [ "Nodin", "J. F.", "" ], [ "Querlioz", "D.", "" ], [ "Portal", "J-M.", "" ], [ "Vianello", "E.", "" ] ]
Crossbar arrays of resistive memories (RRAM) hold the promise of enabling In-Memory Computing (IMC), but essential challenges due to the impact of device imperfection and device endurance have yet to be overcome. In this work, we demonstrate experimentally an RRAM-based IMC logic concept with strong resilience to RRAM variability, even after one million endurance cycles. Our work relies on a generalization of the concept of in-memory Scouting Logic, and we demonstrate it experimentally with up to 16 parallel devices (operands), a new milestone for RRAM in-memory logic. Moreover, we combine IMC with Multi-Level-Cell programming and demonstrate experimentally, for the first time, an IMC RRAM-based MLC 2-bit adder.
1810.09610
EPTCS
Eric C.R. Hehner (University of Toronto)
A Theory of Lazy Imperative Timing
In Proceedings Refine 2018, arXiv:1810.08739
EPTCS 282, 2018, pp. 1-9
10.4204/EPTCS.282.1
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a theory of lazy imperative timing.
[ { "created": "Tue, 23 Oct 2018 00:47:36 GMT", "version": "v1" } ]
2018-10-24
[ [ "Hehner", "Eric C. R.", "", "University of Toronto" ] ]
We present a theory of lazy imperative timing.
2309.08680
Joao P. A. Dantas
Joao P. A. Dantas, Diego Geraldo, Andre N. Costa, Marcos R. O. A. Maximo, Takashi Yoneyama
ASA-SimaaS: Advancing Digital Transformation through Simulation Services in the Brazilian Air Force
null
null
null
null
cs.CY cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work explores the use of military simulations in predicting and evaluating the outcomes of potential scenarios. It highlights the evolution of military simulations and the increased capabilities that have arisen due to the advancement of artificial intelligence. Also, it discusses the various applications of military simulations, such as developing tactics and employment doctrines, training decision-makers, evaluating new acquisitions, and developing new technologies. The paper then focuses on the Brazilian Air Force's efforts to create its own simulation tool, the Aerospace Simulation Environment (Ambiente de Simula\c{c}\~ao Aeroespacial -- ASA in Portuguese), and how this cloud-based service called ASA Simulation as a Service (ASA-SimaaS) can provide greater autonomy and economy for the military force. The main contribution of this work is to present the ASA-SimaaS solution as a means of empowering digital transformation in defense scenarios, establishing a partnership network, and improving the military's simulation capabilities and competitiveness.
[ { "created": "Fri, 15 Sep 2023 18:10:13 GMT", "version": "v1" } ]
2023-09-19
[ [ "Dantas", "Joao P. A.", "" ], [ "Geraldo", "Diego", "" ], [ "Costa", "Andre N.", "" ], [ "Maximo", "Marcos R. O. A.", "" ], [ "Yoneyama", "Takashi", "" ] ]
This work explores the use of military simulations in predicting and evaluating the outcomes of potential scenarios. It highlights the evolution of military simulations and the increased capabilities that have arisen due to the advancement of artificial intelligence. Also, it discusses the various applications of military simulations, such as developing tactics and employment doctrines, training decision-makers, evaluating new acquisitions, and developing new technologies. The paper then focuses on the Brazilian Air Force's efforts to create its own simulation tool, the Aerospace Simulation Environment (Ambiente de Simula\c{c}\~ao Aeroespacial -- ASA in Portuguese), and how this cloud-based service called ASA Simulation as a Service (ASA-SimaaS) can provide greater autonomy and economy for the military force. The main contribution of this work is to present the ASA-SimaaS solution as a means of empowering digital transformation in defense scenarios, establishing a partnership network, and improving the military's simulation capabilities and competitiveness.
2002.07033
Byungsoo Kim
Youngduck Choi, Youngnam Lee, Junghyun Cho, Jineon Baek, Byungsoo Kim, Yeongmin Cha, Dongmin Shin, Chan Bae, Jaewe Heo
Towards an Appropriate Query, Key, and Value Computation for Knowledge Tracing
L@S 2020
null
10.1145/3448139.3448188
null
cs.LG cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge tracing, the act of modeling a student's knowledge through learning activities, is an extensively studied problem in the field of computer-aided education. Although models with attention mechanisms have outperformed traditional approaches such as Bayesian knowledge tracing and collaborative filtering, they share two limitations. Firstly, the models rely on shallow attention layers and fail to capture complex relations among exercises and responses over time. Secondly, different combinations of queries, keys and values for the self-attention layer for knowledge tracing were not extensively explored. The usual practice of using exercises and interactions (exercise-response pairs) as queries and keys/values respectively lacks empirical support. In this paper, we propose a novel Transformer based model for knowledge tracing, SAINT: Separated Self-AttentIve Neural Knowledge Tracing. SAINT has an encoder-decoder structure where exercise and response embedding sequences separately enter the encoder and the decoder respectively, which allows attention layers to be stacked multiple times. To the best of our knowledge, this is the first work to suggest an encoder-decoder model for knowledge tracing that applies deep self-attentive layers to exercises and responses separately. The empirical evaluations on a large-scale knowledge tracing dataset show that SAINT achieves the state-of-the-art performance in knowledge tracing with the improvement of AUC by 1.8% compared to the current state-of-the-art models.
[ { "created": "Fri, 14 Feb 2020 09:21:19 GMT", "version": "v1" }, { "created": "Thu, 25 Jun 2020 05:14:09 GMT", "version": "v2" }, { "created": "Wed, 1 Jul 2020 06:57:13 GMT", "version": "v3" }, { "created": "Tue, 25 Aug 2020 01:02:22 GMT", "version": "v4" }, { "created": "Mon, 1 Feb 2021 02:42:50 GMT", "version": "v5" } ]
2021-02-02
[ [ "Choi", "Youngduck", "" ], [ "Lee", "Youngnam", "" ], [ "Cho", "Junghyun", "" ], [ "Baek", "Jineon", "" ], [ "Kim", "Byungsoo", "" ], [ "Cha", "Yeongmin", "" ], [ "Shin", "Dongmin", "" ], [ "Bae", "Chan", "" ], [ "Heo", "Jaewe", "" ] ]
Knowledge tracing, the act of modeling a student's knowledge through learning activities, is an extensively studied problem in the field of computer-aided education. Although models with attention mechanisms have outperformed traditional approaches such as Bayesian knowledge tracing and collaborative filtering, they share two limitations. Firstly, the models rely on shallow attention layers and fail to capture complex relations among exercises and responses over time. Secondly, different combinations of queries, keys and values for the self-attention layer for knowledge tracing were not extensively explored. The usual practice of using exercises and interactions (exercise-response pairs) as queries and keys/values respectively lacks empirical support. In this paper, we propose a novel Transformer based model for knowledge tracing, SAINT: Separated Self-AttentIve Neural Knowledge Tracing. SAINT has an encoder-decoder structure where exercise and response embedding sequences separately enter the encoder and the decoder respectively, which allows attention layers to be stacked multiple times. To the best of our knowledge, this is the first work to suggest an encoder-decoder model for knowledge tracing that applies deep self-attentive layers to exercises and responses separately. The empirical evaluations on a large-scale knowledge tracing dataset show that SAINT achieves the state-of-the-art performance in knowledge tracing with the improvement of AUC by 1.8% compared to the current state-of-the-art models.
2303.14550
Yufan Huang
Yufan Huang, C. Seshadhri, David F. Gleich
Theoretical bounds on the network community profile from low-rank semi-definite programming
null
null
null
null
cs.SI math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a new connection between a technical measure called $\mu$-conductance that arises in the study of Markov chains for sampling convex bodies and the network community profile that characterizes size-resolved properties of clusters and communities in social and information networks. The idea of $\mu$-conductance is similar to the traditional graph conductance, but disregards sets with small volume. We derive a sequence of optimization problems including a low-rank semi-definite program from which we can derive a lower bound on the optimal $\mu$-conductance value. These ideas give the first theoretically sound bound on the behavior of the network community profile for a wide range of cluster sizes. The algorithm scales up to graphs with hundreds of thousands of nodes and we demonstrate how our framework validates the predicted structures of real-world graphs.
[ { "created": "Sat, 25 Mar 2023 19:58:18 GMT", "version": "v1" } ]
2023-03-28
[ [ "Huang", "Yufan", "" ], [ "Seshadhri", "C.", "" ], [ "Gleich", "David F.", "" ] ]
We study a new connection between a technical measure called $\mu$-conductance that arises in the study of Markov chains for sampling convex bodies and the network community profile that characterizes size-resolved properties of clusters and communities in social and information networks. The idea of $\mu$-conductance is similar to the traditional graph conductance, but disregards sets with small volume. We derive a sequence of optimization problems including a low-rank semi-definite program from which we can derive a lower bound on the optimal $\mu$-conductance value. These ideas give the first theoretically sound bound on the behavior of the network community profile for a wide range of cluster sizes. The algorithm scales up to graphs with hundreds of thousands of nodes and we demonstrate how our framework validates the predicted structures of real-world graphs.
2312.13634
Dale Miller
Matteo Manighetti and Dale Miller
Peano Arithmetic and $\mu$MALL
21 pages
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Formal theories of arithmetic have traditionally been based on either classical or intuitionistic logic, leading to the development of Peano and Heyting arithmetic, respectively. We propose the use of $\mu$MALL as a formal theory of arithmetic based on linear logic. This formal system is presented as a sequent calculus proof system that extends the standard proof system for multiplicative-additive linear logic (MALL) with the addition of the logical connectives universal and existential quantifiers (first-order quantifiers), term equality and non-equality, and the least and greatest fixed point operators. We first demonstrate how functions defined using $\mu$MALL relational specifications can be computed using a simple proof search algorithm. By incorporating weakening and contraction into $\mu$MALL, we obtain $\mu$LK+, a natural candidate for a classical sequent calculus for arithmetic. While important proof theory results are still lacking for $\mu$LK+ (including cut-elimination and the completeness of focusing), we prove that $\mu$LK+ is consistent and that it contains Peano arithmetic. We also prove two conservativity results regarding $\mu$LK+ over $\mu$MALL.
[ { "created": "Thu, 21 Dec 2023 07:50:18 GMT", "version": "v1" } ]
2023-12-22
[ [ "Manighetti", "Matteo", "" ], [ "Miller", "Dale", "" ] ]
Formal theories of arithmetic have traditionally been based on either classical or intuitionistic logic, leading to the development of Peano and Heyting arithmetic, respectively. We propose the use of $\mu$MALL as a formal theory of arithmetic based on linear logic. This formal system is presented as a sequent calculus proof system that extends the standard proof system for multiplicative-additive linear logic (MALL) with the addition of the logical connectives universal and existential quantifiers (first-order quantifiers), term equality and non-equality, and the least and greatest fixed point operators. We first demonstrate how functions defined using $\mu$MALL relational specifications can be computed using a simple proof search algorithm. By incorporating weakening and contraction into $\mu$MALL, we obtain $\mu$LK+, a natural candidate for a classical sequent calculus for arithmetic. While important proof theory results are still lacking for $\mu$LK+ (including cut-elimination and the completeness of focusing), we prove that $\mu$LK+ is consistent and that it contains Peano arithmetic. We also prove two conservativity results regarding $\mu$LK+ over $\mu$MALL.
2406.19317
Parand A. Alamdari
Parand A. Alamdari, Yanshuai Cao, Kevin H. Wilson
Jump Starting Bandits with LLM-Generated Prior Knowledge
null
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present substantial evidence demonstrating the benefits of integrating Large Language Models (LLMs) with a Contextual Multi-Armed Bandit framework. Contextual bandits have been widely used in recommendation systems to generate personalized suggestions based on user-specific contexts. We show that LLMs, pre-trained on extensive corpora rich in human knowledge and preferences, can simulate human behaviours well enough to jump-start contextual multi-armed bandits to reduce online learning regret. We propose an initialization algorithm for contextual bandits by prompting LLMs to produce a pre-training dataset of approximate human preferences for the bandit. This significantly reduces online learning regret and data-gathering costs for training such models. Our approach is validated empirically through two sets of experiments with different bandit setups: one which utilizes LLMs to serve as an oracle and a real-world experiment utilizing data from a conjoint survey experiment.
[ { "created": "Thu, 27 Jun 2024 16:52:19 GMT", "version": "v1" } ]
2024-06-28
[ [ "Alamdari", "Parand A.", "" ], [ "Cao", "Yanshuai", "" ], [ "Wilson", "Kevin H.", "" ] ]
We present substantial evidence demonstrating the benefits of integrating Large Language Models (LLMs) with a Contextual Multi-Armed Bandit framework. Contextual bandits have been widely used in recommendation systems to generate personalized suggestions based on user-specific contexts. We show that LLMs, pre-trained on extensive corpora rich in human knowledge and preferences, can simulate human behaviours well enough to jump-start contextual multi-armed bandits to reduce online learning regret. We propose an initialization algorithm for contextual bandits by prompting LLMs to produce a pre-training dataset of approximate human preferences for the bandit. This significantly reduces online learning regret and data-gathering costs for training such models. Our approach is validated empirically through two sets of experiments with different bandit setups: one which utilizes LLMs to serve as an oracle and a real-world experiment utilizing data from a conjoint survey experiment.
2310.12036
Mohammad Gheshlaghi Azar
Mohammad Gheshlaghi Azar and Mark Rowland and Bilal Piot and Daniel Guo and Daniele Calandriello and Michal Valko and R\'emi Munos
A General Theoretical Paradigm to Understand Learning from Human Preferences
null
null
null
null
cs.AI cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
The prevalent deployment of learning from human preferences through reinforcement learning (RLHF) relies on two important approximations: the first assumes that pairwise preferences can be substituted with pointwise rewards. The second assumes that a reward model trained on these pointwise rewards can generalize from collected data to out-of-distribution data sampled by the policy. Recently, Direct Preference Optimisation (DPO) has been proposed as an approach that bypasses the second approximation and learns a policy directly from collected data without the reward modelling stage. However, this method still heavily relies on the first approximation. In this paper we try to gain a deeper theoretical understanding of these practical algorithms. In particular we derive a new general objective called $\Psi$PO for learning from human preferences that is expressed in terms of pairwise preferences and therefore bypasses both approximations. This new general objective allows us to perform an in-depth analysis of the behavior of RLHF and DPO (as special cases of $\Psi$PO) and to identify their potential pitfalls. We then consider another special case for $\Psi$PO by setting $\Psi$ simply to Identity, for which we can derive an efficient optimisation procedure, prove performance guarantees and demonstrate its empirical superiority to DPO on some illustrative examples.
[ { "created": "Wed, 18 Oct 2023 15:21:28 GMT", "version": "v1" }, { "created": "Wed, 22 Nov 2023 00:02:49 GMT", "version": "v2" } ]
2023-11-23
[ [ "Azar", "Mohammad Gheshlaghi", "" ], [ "Rowland", "Mark", "" ], [ "Piot", "Bilal", "" ], [ "Guo", "Daniel", "" ], [ "Calandriello", "Daniele", "" ], [ "Valko", "Michal", "" ], [ "Munos", "Rémi", "" ] ]
The prevalent deployment of learning from human preferences through reinforcement learning (RLHF) relies on two important approximations: the first assumes that pairwise preferences can be substituted with pointwise rewards. The second assumes that a reward model trained on these pointwise rewards can generalize from collected data to out-of-distribution data sampled by the policy. Recently, Direct Preference Optimisation (DPO) has been proposed as an approach that bypasses the second approximation and learns a policy directly from collected data without the reward modelling stage. However, this method still heavily relies on the first approximation. In this paper we try to gain a deeper theoretical understanding of these practical algorithms. In particular we derive a new general objective called $\Psi$PO for learning from human preferences that is expressed in terms of pairwise preferences and therefore bypasses both approximations. This new general objective allows us to perform an in-depth analysis of the behavior of RLHF and DPO (as special cases of $\Psi$PO) and to identify their potential pitfalls. We then consider another special case for $\Psi$PO by setting $\Psi$ simply to Identity, for which we can derive an efficient optimisation procedure, prove performance guarantees and demonstrate its empirical superiority to DPO on some illustrative examples.
1701.02149
Wenpeng Yin
Wenpeng Yin and Hinrich Sch\"utze
Task-Specific Attentive Pooling of Phrase Alignments Contributes to Sentence Matching
EACL'2017 long paper. arXiv admin note: substantial text overlap with arXiv:1604.06896
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work studies comparatively two typical sentence matching tasks: textual entailment (TE) and answer selection (AS), observing that weaker phrase alignments are more critical in TE, while stronger phrase alignments deserve more attention in AS. The key to reaching this observation lies in phrase detection, phrase representation, phrase alignment, and more importantly how to connect those aligned phrases of different matching degrees with the final classifier. Prior work (i) has limitations in phrase generation and representation, or (ii) conducts alignment at word and phrase levels by handcrafted features or (iii) utilizes a single framework of alignment without considering the characteristics of specific tasks, which limits the framework's effectiveness across tasks. We propose an architecture based on Gated Recurrent Unit that supports (i) representation learning of phrases of arbitrary granularity and (ii) task-specific attentive pooling of phrase alignments between two sentences. Experimental results on TE and AS match our observation and show the effectiveness of our approach.
[ { "created": "Mon, 9 Jan 2017 12:03:11 GMT", "version": "v1" } ]
2017-01-10
[ [ "Yin", "Wenpeng", "" ], [ "Schütze", "Hinrich", "" ] ]
This work studies comparatively two typical sentence matching tasks: textual entailment (TE) and answer selection (AS), observing that weaker phrase alignments are more critical in TE, while stronger phrase alignments deserve more attention in AS. The key to reaching this observation lies in phrase detection, phrase representation, phrase alignment, and more importantly how to connect those aligned phrases of different matching degrees with the final classifier. Prior work (i) has limitations in phrase generation and representation, or (ii) conducts alignment at word and phrase levels by handcrafted features or (iii) utilizes a single framework of alignment without considering the characteristics of specific tasks, which limits the framework's effectiveness across tasks. We propose an architecture based on Gated Recurrent Unit that supports (i) representation learning of phrases of arbitrary granularity and (ii) task-specific attentive pooling of phrase alignments between two sentences. Experimental results on TE and AS match our observation and show the effectiveness of our approach.
2301.09515
Axel Sauer
Axel Sauer, Tero Karras, Samuli Laine, Andreas Geiger, Timo Aila
StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis
Project page: https://sites.google.com/view/stylegan-t/
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-image synthesis has recently seen significant progress thanks to large pretrained language models, large-scale training data, and the introduction of scalable model families such as diffusion and autoregressive models. However, the best-performing models require iterative evaluation to generate a single sample. In contrast, generative adversarial networks (GANs) only need a single forward pass. They are thus much faster, but they currently remain far behind the state-of-the-art in large-scale text-to-image synthesis. This paper aims to identify the necessary steps to regain competitiveness. Our proposed model, StyleGAN-T, addresses the specific requirements of large-scale text-to-image synthesis, such as large capacity, stable training on diverse datasets, strong text alignment, and controllable variation vs. text alignment tradeoff. StyleGAN-T significantly improves over previous GANs and outperforms distilled diffusion models - the previous state-of-the-art in fast text-to-image synthesis - in terms of sample quality and speed.
[ { "created": "Mon, 23 Jan 2023 16:05:45 GMT", "version": "v1" } ]
2023-01-24
[ [ "Sauer", "Axel", "" ], [ "Karras", "Tero", "" ], [ "Laine", "Samuli", "" ], [ "Geiger", "Andreas", "" ], [ "Aila", "Timo", "" ] ]
Text-to-image synthesis has recently seen significant progress thanks to large pretrained language models, large-scale training data, and the introduction of scalable model families such as diffusion and autoregressive models. However, the best-performing models require iterative evaluation to generate a single sample. In contrast, generative adversarial networks (GANs) only need a single forward pass. They are thus much faster, but they currently remain far behind the state-of-the-art in large-scale text-to-image synthesis. This paper aims to identify the necessary steps to regain competitiveness. Our proposed model, StyleGAN-T, addresses the specific requirements of large-scale text-to-image synthesis, such as large capacity, stable training on diverse datasets, strong text alignment, and controllable variation vs. text alignment tradeoff. StyleGAN-T significantly improves over previous GANs and outperforms distilled diffusion models - the previous state-of-the-art in fast text-to-image synthesis - in terms of sample quality and speed.
2305.01962
Luisa Herrmann
Luisa Herrmann and Vincent Peth and Sebastian Rudolph
Decidable (Ac)counting with Parikh and Muller: Adding Presburger Arithmetic to Monadic Second-Order Logic over Tree-Interpretable Structures
extended version, accepted at CSL 2024
null
null
null
cs.LO cs.FL
http://creativecommons.org/licenses/by/4.0/
We propose $\omega$MSO$\Join$BAPA, an expressive logic for describing countable structures, which subsumes and transcends both Counting Monadic Second-Order Logic (CMSO) and Boolean Algebra with Presburger Arithmetic (BAPA). We show that satisfiability of $\omega$MSO$\Join$BAPA is decidable over the class of labeled infinite binary trees, whereas it becomes undecidable even for rather mild relaxations. The decidability result is established by an elaborate multi-step transformation into a particular normal form, followed by the deployment of Parikh-Muller Tree Automata, a novel kind of automaton for infinite labeled binary trees, integrating and generalizing both Muller and Parikh automata while still exhibiting a decidable (in fact PSpace-complete) emptiness problem. By means of MSO-interpretations, we lift the decidability result to all tree-interpretable classes of structures, including the classes of finite/countable structures of bounded treewidth/cliquewidth/partitionwidth. We generalize the result further by showing that decidability is even preserved when coupling width-restricted $\omega$MSO$\Join$BAPA with width-unrestricted two-variable logic with advanced counting. A final showcase demonstrates how our results can be leveraged to harvest decidability results for expressive $\mu$-calculi extended by global Presburger constraints.
[ { "created": "Wed, 3 May 2023 08:23:34 GMT", "version": "v1" }, { "created": "Thu, 23 Nov 2023 13:40:54 GMT", "version": "v2" } ]
2023-11-27
[ [ "Herrmann", "Luisa", "" ], [ "Peth", "Vincent", "" ], [ "Rudolph", "Sebastian", "" ] ]
We propose $\omega$MSO$\Join$BAPA, an expressive logic for describing countable structures, which subsumes and transcends both Counting Monadic Second-Order Logic (CMSO) and Boolean Algebra with Presburger Arithmetic (BAPA). We show that satisfiability of $\omega$MSO$\Join$BAPA is decidable over the class of labeled infinite binary trees, whereas it becomes undecidable even for rather mild relaxations. The decidability result is established by an elaborate multi-step transformation into a particular normal form, followed by the deployment of Parikh-Muller Tree Automata, a novel kind of automaton for infinite labeled binary trees, integrating and generalizing both Muller and Parikh automata while still exhibiting a decidable (in fact PSpace-complete) emptiness problem. By means of MSO-interpretations, we lift the decidability result to all tree-interpretable classes of structures, including the classes of finite/countable structures of bounded treewidth/cliquewidth/partitionwidth. We generalize the result further by showing that decidability is even preserved when coupling width-restricted $\omega$MSO$\Join$BAPA with width-unrestricted two-variable logic with advanced counting. A final showcase demonstrates how our results can be leveraged to harvest decidability results for expressive $\mu$-calculi extended by global Presburger constraints.
1608.04236
Andrew Brock
Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston
Generative and Discriminative Voxel Modeling with Convolutional Neural Networks
9 pages, 5 figures, 2 tables
null
null
null
cs.CV cs.HC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification.
[ { "created": "Mon, 15 Aug 2016 11:14:35 GMT", "version": "v1" }, { "created": "Tue, 16 Aug 2016 08:06:24 GMT", "version": "v2" } ]
2016-08-17
[ [ "Brock", "Andrew", "" ], [ "Lim", "Theodore", "" ], [ "Ritchie", "J. M.", "" ], [ "Weston", "Nick", "" ] ]
When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification.
2407.13469
Abderrahmane Issam
Abderrahmane Issam and Yusuf Can Semerci and Jan Scholtes and Gerasimos Spanakis
Fixed and Adaptive Simultaneous Machine Translation Strategies Using Adapters
Accepted at IWSLT 2024
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Simultaneous machine translation aims at solving the task of real-time translation by starting to translate before consuming the full input, which poses challenges in terms of balancing quality and latency of the translation. The wait-$k$ policy offers a solution by starting to translate after consuming $k$ words, where the choice of the number $k$ directly affects the latency and quality. In applications where we seek to keep the choice over latency and quality at inference, the wait-$k$ policy obliges us to train more than one model. In this paper, we address the challenge of building one model that can fulfil multiple latency levels and we achieve this by introducing lightweight adapter modules into the decoder. The adapters are trained to be specialized for different wait-$k$ values and compared to other techniques they offer more flexibility to allow for reaping the benefits of parameter sharing and minimizing interference. Additionally, we show that by combining with an adaptive strategy, we can further improve the results. Experiments on two language directions show that our method outperforms or competes with other strong baselines on most latency values.
[ { "created": "Thu, 18 Jul 2024 12:42:45 GMT", "version": "v1" } ]
2024-07-19
[ [ "Issam", "Abderrahmane", "" ], [ "Semerci", "Yusuf Can", "" ], [ "Scholtes", "Jan", "" ], [ "Spanakis", "Gerasimos", "" ] ]
Simultaneous machine translation aims at solving the task of real-time translation by starting to translate before consuming the full input, which poses challenges in terms of balancing quality and latency of the translation. The wait-$k$ policy offers a solution by starting to translate after consuming $k$ words, where the choice of the number $k$ directly affects the latency and quality. In applications where we seek to keep the choice over latency and quality at inference, the wait-$k$ policy obliges us to train more than one model. In this paper, we address the challenge of building one model that can fulfil multiple latency levels and we achieve this by introducing lightweight adapter modules into the decoder. The adapters are trained to be specialized for different wait-$k$ values and compared to other techniques they offer more flexibility to allow for reaping the benefits of parameter sharing and minimizing interference. Additionally, we show that by combining with an adaptive strategy, we can further improve the results. Experiments on two language directions show that our method outperforms or competes with other strong baselines on most latency values.
2008.04582
Xinzhu Ma
Xinzhu Ma, Shinan Liu, Zhiyi Xia, Hongwen Zhang, Xingyu Zeng and Wanli Ouyang
Rethinking Pseudo-LiDAR Representation
ECCV2020. Supplemental Material attached
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recently proposed pseudo-LiDAR based 3D detectors greatly improve the benchmark of the monocular/stereo 3D detection task. However, the underlying mechanism remains obscure to the research community. In this paper, we perform an in-depth investigation and observe that the efficacy of the pseudo-LiDAR representation comes from the coordinate transformation, instead of the data representation itself. Based on this observation, we design an image-based CNN detector named PatchNet, which is more generalized and can be instantiated as pseudo-LiDAR based 3D detectors. Moreover, the pseudo-LiDAR data in our PatchNet is organized as the image representation, which means existing 2D CNN designs can be easily utilized for extracting deep features from the input data and boosting 3D detection performance. We conduct extensive experiments on the challenging KITTI dataset, where the proposed PatchNet outperforms all existing pseudo-LiDAR based counterparts. Code has been made available at: https://github.com/xinzhuma/patchnet.
[ { "created": "Tue, 11 Aug 2020 08:44:18 GMT", "version": "v1" } ]
2020-08-12
[ [ "Ma", "Xinzhu", "" ], [ "Liu", "Shinan", "" ], [ "Xia", "Zhiyi", "" ], [ "Zhang", "Hongwen", "" ], [ "Zeng", "Xingyu", "" ], [ "Ouyang", "Wanli", "" ] ]
The recently proposed pseudo-LiDAR based 3D detectors greatly improve the benchmark of the monocular/stereo 3D detection task. However, the underlying mechanism remains obscure to the research community. In this paper, we perform an in-depth investigation and observe that the efficacy of the pseudo-LiDAR representation comes from the coordinate transformation, instead of the data representation itself. Based on this observation, we design an image-based CNN detector named PatchNet, which is more generalized and can be instantiated as pseudo-LiDAR based 3D detectors. Moreover, the pseudo-LiDAR data in our PatchNet is organized as the image representation, which means existing 2D CNN designs can be easily utilized for extracting deep features from the input data and boosting 3D detection performance. We conduct extensive experiments on the challenging KITTI dataset, where the proposed PatchNet outperforms all existing pseudo-LiDAR based counterparts. Code has been made available at: https://github.com/xinzhuma/patchnet.
2401.14303
Liliana Cojocaru
Liliana Cojocaru
On Some Complexity Results for Even Linear Languages
16 pages, no figure. arXiv admin note: substantial text overlap with arXiv:1512.09207
null
null
null
cs.FL cs.CC cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We deal with a normal form for context-free grammars, called Dyck normal form. This normal form is a syntactical restriction of the Chomsky normal form, in which the two nonterminals occurring on the right-hand side of a rule are paired nonterminals. This pairwise property, along with several other terminal rewriting conditions, makes it possible to define a homomorphism from Dyck words to words generated by a grammar in Dyck normal form. We prove that for each context-free language L, there exist an integer K and a homomorphism phi such that L=phi(D'_K), where D'_K is a subset of D_K and D_K is the one-sided Dyck language over K letters. As an application we give an alternative proof of the inclusion of the class of even linear languages in AC1.
[ { "created": "Thu, 25 Jan 2024 16:50:57 GMT", "version": "v1" } ]
2024-01-26
[ [ "Cojocaru", "Liliana", "" ] ]
We deal with a normal form for context-free grammars, called Dyck normal form. This normal form is a syntactical restriction of the Chomsky normal form, in which the two nonterminals occurring on the right-hand side of a rule are paired nonterminals. This pairwise property, along with several other terminal rewriting conditions, makes it possible to define a homomorphism from Dyck words to words generated by a grammar in Dyck normal form. We prove that for each context-free language L, there exist an integer K and a homomorphism phi such that L=phi(D'_K), where D'_K is a subset of D_K and D_K is the one-sided Dyck language over K letters. As an application we give an alternative proof of the inclusion of the class of even linear languages in AC1.
2005.00196
EPTCS
Niels Voorneveld
From Equations to Distinctions: Two Interpretations of Effectful Computations
In Proceedings MSFP 2020, arXiv:2004.14735
EPTCS 317, 2020, pp. 1-17
10.4204/EPTCS.317.1
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are several ways to define program equivalence for functional programs with algebraic effects. We consider two complementing ways to specify behavioural equivalence. One way is to specify a set of axiomatic equations, and allow proof methods to show that two programs are equivalent. Another way is to specify an Eilenberg-Moore algebra, which generates tests that can distinguish programs. These two methods are said to complement each other if any two programs can be shown to be equivalent if and only if there is no test to distinguish them. In this paper, we study a generic method to formulate from a set of axiomatic equations an Eilenberg-Moore algebra which complements it. We will look at an additional condition which must be satisfied for this to work. We then apply this method to a handful of examples of effects, including probability and global store, and show they coincide with the usual algebras from the literature. We will moreover study whether or not it is possible to specify a set of unary Boolean modalities which could function as distinction-tests complementing the equational theory.
[ { "created": "Fri, 1 May 2020 03:41:39 GMT", "version": "v1" } ]
2020-05-04
[ [ "Voorneveld", "Niels", "" ] ]
There are several ways to define program equivalence for functional programs with algebraic effects. We consider two complementing ways to specify behavioural equivalence. One way is to specify a set of axiomatic equations, and allow proof methods to show that two programs are equivalent. Another way is to specify an Eilenberg-Moore algebra, which generates tests that can distinguish programs. These two methods are said to complement each other if any two programs can be shown to be equivalent if and only if there is no test to distinguish them. In this paper, we study a generic method to formulate from a set of axiomatic equations an Eilenberg-Moore algebra which complements it. We will look at an additional condition which must be satisfied for this to work. We then apply this method to a handful of examples of effects, including probability and global store, and show they coincide with the usual algebras from the literature. We will moreover study whether or not it is possible to specify a set of unary Boolean modalities which could function as distinction-tests complementing the equational theory.
2405.15336
Matthias Hoffmann
Matthias K. Hoffmann, Julian M\"uhlenhoff, Zhaoheng Ding, Thomas Sattel, Kathrin Fla{\ss}kamp
An iterative closest point algorithm for marker-free 3D shape registration of continuum robots
11 pages, 8 figures, 2 algorithms, journal
null
null
null
cs.RO eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Continuum robots have emerged as a promising technology in the medical field due to their potential for accessing deep-seated locations of the human body with low surgical trauma. When deriving physics-based models for these robots, evaluating the models poses a significant challenge due to the difficulty in accurately measuring their intricate shapes. In this work, we present an optimization-based 3D shape registration algorithm for estimation of the backbone shape of slender continuum robots as part of a photogrammetric measurement. Our approach to estimating the backbone optimally matches a parametric three-dimensional curve to images of the robot. Since we incorporate an iterative closest point algorithm into our method, we do not need prior knowledge of the robot's position within the respective images. In our experiments with artificial and real images of a concentric tube continuum robot, we found an average maximum deviation of the reconstruction of 0.665 mm from simulation data and 0.939 mm from manual measurements. These results show that our algorithm is well capable of producing high-accuracy positional data from images of continuum robots.
[ { "created": "Fri, 24 May 2024 08:17:40 GMT", "version": "v1" } ]
2024-05-27
[ [ "Hoffmann", "Matthias K.", "" ], [ "Mühlenhoff", "Julian", "" ], [ "Ding", "Zhaoheng", "" ], [ "Sattel", "Thomas", "" ], [ "Flaßkamp", "Kathrin", "" ] ]
Continuum robots have emerged as a promising technology in the medical field due to their potential for accessing deep-seated locations of the human body with low surgical trauma. When deriving physics-based models for these robots, evaluating the models poses a significant challenge due to the difficulty in accurately measuring their intricate shapes. In this work, we present an optimization-based 3D shape registration algorithm for estimation of the backbone shape of slender continuum robots as part of a photogrammetric measurement. Our approach to estimating the backbone optimally matches a parametric three-dimensional curve to images of the robot. Since we incorporate an iterative closest point algorithm into our method, we do not need prior knowledge of the robot's position within the respective images. In our experiments with artificial and real images of a concentric tube continuum robot, we found an average maximum deviation of the reconstruction of 0.665 mm from simulation data and 0.939 mm from manual measurements. These results show that our algorithm is well capable of producing high-accuracy positional data from images of continuum robots.
1610.08914
Lucas Dixon
Ellery Wulczyn, Nithum Thain, Lucas Dixon
Ex Machina: Personal Attacks Seen at Scale
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The damage personal attacks cause to online discourse motivates many platforms to try to curb the phenomenon. However, understanding the prevalence and impact of personal attacks in online platforms at scale remains surprisingly difficult. The contribution of this paper is to develop and illustrate a method that combines crowdsourcing and machine learning to analyze personal attacks at scale. We show an evaluation method for a classifier in terms of the aggregated number of crowd-workers it can approximate. We apply our methodology to English Wikipedia, generating a corpus of over 100k high quality human-labeled comments and 63M machine-labeled ones from a classifier that is as good as the aggregate of 3 crowd-workers, as measured by the area under the ROC curve and Spearman correlation. Using this corpus of machine-labeled scores, our methodology allows us to explore some of the open questions about the nature of online personal attacks. This reveals that the majority of personal attacks on Wikipedia are not the result of a few malicious users, nor primarily the consequence of allowing anonymous contributions from unregistered users.
[ { "created": "Thu, 27 Oct 2016 18:18:18 GMT", "version": "v1" }, { "created": "Sat, 25 Feb 2017 18:38:16 GMT", "version": "v2" } ]
2017-02-28
[ [ "Wulczyn", "Ellery", "" ], [ "Thain", "Nithum", "" ], [ "Dixon", "Lucas", "" ] ]
The damage personal attacks cause to online discourse motivates many platforms to try to curb the phenomenon. However, understanding the prevalence and impact of personal attacks in online platforms at scale remains surprisingly difficult. The contribution of this paper is to develop and illustrate a method that combines crowdsourcing and machine learning to analyze personal attacks at scale. We show an evaluation method for a classifier in terms of the aggregated number of crowd-workers it can approximate. We apply our methodology to English Wikipedia, generating a corpus of over 100k high quality human-labeled comments and 63M machine-labeled ones from a classifier that is as good as the aggregate of 3 crowd-workers, as measured by the area under the ROC curve and Spearman correlation. Using this corpus of machine-labeled scores, our methodology allows us to explore some of the open questions about the nature of online personal attacks. This reveals that the majority of personal attacks on Wikipedia are not the result of a few malicious users, nor primarily the consequence of allowing anonymous contributions from unregistered users.
1104.0199
Garth Wells
Kristian B. {\O}lgaard and Garth N. Wells
Optimisations for quadrature representations of finite element tensors through automated code generation
null
ACM Trans. Math. Softw. 37, 1, Article 8 (January 2010), 23 pages
10.1145/1644001.1644009
null
cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine aspects of the computation of finite element matrices and vectors which are made possible by automated code generation. Given a variational form in a syntax which resembles standard mathematical notation, the low-level computer code for building finite element tensors, typically matrices, vectors and scalars, can be generated automatically via a form compiler. In particular, the generation of code for computing finite element matrices using a quadrature approach is addressed. For quadrature representations, a number of optimisation strategies which are made possible by automated code generation are presented. The relative performance of two different automatically generated representations of finite element matrices is examined, with a particular emphasis on complicated variational forms. It is shown that approaches which perform best for simple forms are not tractable for more complicated problems in terms of run time performance, the time required to generate the code or the size of the generated code. The approach and optimisations elaborated here are effective for a range of variational forms.
[ { "created": "Fri, 1 Apr 2011 15:29:05 GMT", "version": "v1" } ]
2011-04-04
[ [ "Ølgaard", "Kristian B.", "" ], [ "Wells", "Garth N.", "" ] ]
We examine aspects of the computation of finite element matrices and vectors which are made possible by automated code generation. Given a variational form in a syntax which resembles standard mathematical notation, the low-level computer code for building finite element tensors, typically matrices, vectors and scalars, can be generated automatically via a form compiler. In particular, the generation of code for computing finite element matrices using a quadrature approach is addressed. For quadrature representations, a number of optimisation strategies which are made possible by automated code generation are presented. The relative performance of two different automatically generated representations of finite element matrices is examined, with a particular emphasis on complicated variational forms. It is shown that approaches which perform best for simple forms are not tractable for more complicated problems in terms of run time performance, the time required to generate the code or the size of the generated code. The approach and optimisations elaborated here are effective for a range of variational forms.
0704.2544
Vishwambhar Rathi
Vishwambhar Rathi, Ruediger Urbanke
Existence Proofs of Some EXIT Like Functions
To appear in proc. of ISIT 2007
null
null
null
cs.IT math.IT
null
The Extended BP (EBP) Generalized EXIT (GEXIT) function introduced in \cite{MMRU05} plays a fundamental role in the asymptotic analysis of sparse graph codes. For transmission over the binary erasure channel (BEC) the analytic properties of the EBP GEXIT function are relatively simple and well understood. The general case is much harder and even the existence of the curve is not known in general. We introduce some tools from non-linear analysis which can be useful to prove the existence of EXIT like curves in some cases. The main tool is the Krasnoselskii-Rabinowitz (KR) bifurcation theorem.
[ { "created": "Thu, 19 Apr 2007 14:36:43 GMT", "version": "v1" } ]
2007-07-13
[ [ "Rathi", "Vishwambhar", "" ], [ "Urbanke", "Ruediger", "" ] ]
The Extended BP (EBP) Generalized EXIT (GEXIT) function introduced in \cite{MMRU05} plays a fundamental role in the asymptotic analysis of sparse graph codes. For transmission over the binary erasure channel (BEC) the analytic properties of the EBP GEXIT function are relatively simple and well understood. The general case is much harder and even the existence of the curve is not known in general. We introduce some tools from non-linear analysis which can be useful to prove the existence of EXIT like curves in some cases. The main tool is the Krasnoselskii-Rabinowitz (KR) bifurcation theorem.
1506.02876
Akram Hakiri
Akram Hakiri (LAAS), Pascal Berthou (UPS, LAAS)
Leveraging SDN for The 5G Networks: Trends, Prospects and Challenges
appears in Software Defined Mobile Networks : Beyond LTE Network Architecture, Wiley Series in Communications Networking \& Distributed Systems 2015, Mobile \& Wireless Communications, 978-1-118-90028-4
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today, 4G mobile systems are evolving to provide IP connectivity for diverse applications and services at up to 1 Gbps. They are designed to optimize network performance, improve cost efficiency and facilitate the uptake of mass-market IP-based services. Nevertheless, the growing demand and the diverse patterns of mobile traffic place an increasing strain on cellular networks. To cater to the large volumes of traffic delivered by the new services and applications, the future 5G network will provide the fundamental infrastructure as billions of new devices with less predictable traffic patterns join the network. The 5G technology is presently in its early research stages, so research is currently underway exploring different architectural paths to address its key drivers. SDN techniques have been seen as promising enablers for this vision of carrier networks and will likely play a crucial role in the design of 5G wireless networks. A critical understanding of this emerging paradigm is necessary to address the multiple challenges of the future SDN-enabled 5G technology. To address this requirement, a survey of the emerging trends and prospects is presented, followed by an in-depth discussion of the major challenges in this area.
[ { "created": "Tue, 9 Jun 2015 12:10:51 GMT", "version": "v1" } ]
2015-06-10
[ [ "Hakiri", "Akram", "", "LAAS" ], [ "Berthou", "Pascal", "", "UPS, LAAS" ] ]
Today, 4G mobile systems are evolving to provide IP connectivity for diverse applications and services at up to 1 Gbps. They are designed to optimize network performance, improve cost efficiency and facilitate the uptake of mass-market IP-based services. Nevertheless, the growing demand and the diverse patterns of mobile traffic place an increasing strain on cellular networks. To cater to the large volumes of traffic delivered by the new services and applications, the future 5G network will provide the fundamental infrastructure as billions of new devices with less predictable traffic patterns join the network. The 5G technology is presently in its early research stages, so research is currently underway exploring different architectural paths to address its key drivers. SDN techniques have been seen as promising enablers for this vision of carrier networks and will likely play a crucial role in the design of 5G wireless networks. A critical understanding of this emerging paradigm is necessary to address the multiple challenges of the future SDN-enabled 5G technology. To address this requirement, a survey of the emerging trends and prospects is presented, followed by an in-depth discussion of the major challenges in this area.
1712.09775
Uche Nnolim
Uche A. Nnolim
Sky detection and log illumination refinement for PDE-based hazy image contrast enhancement
22 pages, 13 figures, 5 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report presents the results of a sky detection technique used to improve the performance of a previously developed partial differential equation (PDE)-based hazy image enhancement algorithm. Additionally, a proposed alternative method utilizes a function for log illumination refinement to improve de-hazing results while avoiding over-enhancement of sky or homogeneous regions. The algorithms were tested with several benchmark and calibration images and compared with several standard algorithms from the literature. Results indicate that the algorithms yield mostly consistent results and surpass several of the other algorithms in terms of colour and contrast enhancement, in addition to improved edge visibility.
[ { "created": "Thu, 28 Dec 2017 06:30:26 GMT", "version": "v1" }, { "created": "Sat, 10 Mar 2018 08:56:18 GMT", "version": "v2" } ]
2018-03-13
[ [ "Nnolim", "Uche A.", "" ] ]
This report presents the results of a sky detection technique used to improve the performance of a previously developed partial differential equation (PDE)-based hazy image enhancement algorithm. Additionally, a proposed alternative method utilizes a function for log illumination refinement to improve de-hazing results while avoiding over-enhancement of sky or homogeneous regions. The algorithms were tested with several benchmark and calibration images and compared with several standard algorithms from the literature. Results indicate that the algorithms yield mostly consistent results and surpass several of the other algorithms in terms of colour and contrast enhancement, in addition to improved edge visibility.
2103.10847
Danny Weyns
Danny Weyns, Bradley Schmerl, Masako Kishida, Alberto Leva, Marin Litoiu, Necmiye Ozay, Colin Paterson, Kenji Tei
Towards Better Adaptive Systems by Combining MAPE, Control Theory, and Machine Learning
7 pages
null
null
null
cs.SE cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two established approaches to engineer adaptive systems are architecture-based adaptation that uses a Monitor-Analysis-Planning-Executing (MAPE) loop that reasons over architectural models (aka Knowledge) to make adaptation decisions, and control-based adaptation that relies on principles of control theory (CT) to realize adaptation. Recently, we also observe a rapidly growing interest in applying machine learning (ML) to support different adaptation mechanisms. While MAPE and CT have particular characteristics and strengths to be applied independently, in this paper, we are concerned with the question of how these approaches are related with one another and whether combining them and supporting them with ML can produce better adaptive systems. We motivate the combined use of different adaptation approaches using a scenario of a cloud-based enterprise system and illustrate the analysis when combining the different approaches. To conclude, we offer a set of open questions for further research in this interesting area.
[ { "created": "Fri, 19 Mar 2021 15:00:08 GMT", "version": "v1" } ]
2021-03-22
[ [ "Weyns", "Danny", "" ], [ "Schmerl", "Bradley", "" ], [ "Kishida", "Masako", "" ], [ "Leva", "Alberto", "" ], [ "Litoiu", "Marin", "" ], [ "Ozay", "Necmiye", "" ], [ "Paterson", "Colin", "" ], [ "Tei", "Kenji", "" ] ]
Two established approaches to engineer adaptive systems are architecture-based adaptation that uses a Monitor-Analysis-Planning-Executing (MAPE) loop that reasons over architectural models (aka Knowledge) to make adaptation decisions, and control-based adaptation that relies on principles of control theory (CT) to realize adaptation. Recently, we also observe a rapidly growing interest in applying machine learning (ML) to support different adaptation mechanisms. While MAPE and CT have particular characteristics and strengths to be applied independently, in this paper, we are concerned with the question of how these approaches are related with one another and whether combining them and supporting them with ML can produce better adaptive systems. We motivate the combined use of different adaptation approaches using a scenario of a cloud-based enterprise system and illustrate the analysis when combining the different approaches. To conclude, we offer a set of open questions for further research in this interesting area.
1905.11519
Kush Varshney
Kush R. Varshney and Aleksandra Mojsilovic
Open Platforms for Artificial Intelligence for Social Good: Common Patterns as a Pathway to True Impact
appearing at the 2019 ICML AI for Social Good Workshop
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The AI for social good movement has now reached a state in which a large number of one-off demonstrations have illustrated that partnerships of AI practitioners and social change organizations are possible and can address problems faced in sustainable development. In this paper, we discuss how moving from demonstrations to true impact on humanity will require a different course of action, namely open platforms containing foundational AI capabilities to support common needs of multiple organizations working in similar topical areas. We lend credence to this proposal by describing three example patterns of social good problems and their AI-based solutions: natural language processing for making sense of international development reports, causal inference for providing guidance to vulnerable individuals, and discrimination-aware classification for supporting unbiased allocation decisions. We argue that the development of such platforms will be possible through convenings of social change organizations, AI companies, and grantmaking foundations.
[ { "created": "Mon, 27 May 2019 21:42:56 GMT", "version": "v1" } ]
2019-05-29
[ [ "Varshney", "Kush R.", "" ], [ "Mojsilovic", "Aleksandra", "" ] ]
The AI for social good movement has now reached a state in which a large number of one-off demonstrations have illustrated that partnerships of AI practitioners and social change organizations are possible and can address problems faced in sustainable development. In this paper, we discuss how moving from demonstrations to true impact on humanity will require a different course of action, namely open platforms containing foundational AI capabilities to support common needs of multiple organizations working in similar topical areas. We lend credence to this proposal by describing three example patterns of social good problems and their AI-based solutions: natural language processing for making sense of international development reports, causal inference for providing guidance to vulnerable individuals, and discrimination-aware classification for supporting unbiased allocation decisions. We argue that the development of such platforms will be possible through convenings of social change organizations, AI companies, and grantmaking foundations.
2403.03082
Haneol Kang
Haneol Kang, Dong-Wan Choi
Recall-Oriented Continual Learning with Generative Adversarial Meta-Model
Accepted in AAAI-2024 (Oral presentation)
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stability-plasticity dilemma is a major challenge in continual learning, as it involves balancing the conflicting objectives of maintaining performance on previous tasks while learning new tasks. In this paper, we propose the recall-oriented continual learning framework to address this challenge. Inspired by the human brain's ability to separate the mechanisms responsible for stability and plasticity, our framework consists of a two-level architecture where an inference network effectively acquires new knowledge and a generative network recalls past knowledge when necessary. In particular, to maximize the stability of past knowledge, we investigate the complexity of knowledge depending on different representations, and thereby introducing generative adversarial meta-model (GAMM) that incrementally learns task-specific parameters instead of input data samples of the task. Through our experiments, we show that our framework not only effectively learns new knowledge without any disruption but also achieves high stability of previous knowledge in both task-aware and task-agnostic learning scenarios. Our code is available at: https://github.com/bigdata-inha/recall-oriented-cl-framework.
[ { "created": "Tue, 5 Mar 2024 16:08:59 GMT", "version": "v1" } ]
2024-03-06
[ [ "Kang", "Haneol", "" ], [ "Choi", "Dong-Wan", "" ] ]
The stability-plasticity dilemma is a major challenge in continual learning, as it involves balancing the conflicting objectives of maintaining performance on previous tasks while learning new tasks. In this paper, we propose the recall-oriented continual learning framework to address this challenge. Inspired by the human brain's ability to separate the mechanisms responsible for stability and plasticity, our framework consists of a two-level architecture where an inference network effectively acquires new knowledge and a generative network recalls past knowledge when necessary. In particular, to maximize the stability of past knowledge, we investigate the complexity of knowledge depending on different representations, and thereby introduce the generative adversarial meta-model (GAMM), which incrementally learns task-specific parameters instead of input data samples of the task. Through our experiments, we show that our framework not only effectively learns new knowledge without any disruption but also achieves high stability of previous knowledge in both task-aware and task-agnostic learning scenarios. Our code is available at: https://github.com/bigdata-inha/recall-oriented-cl-framework.
2110.07184
Eun Sun Lee
Eun Sun Lee, Junho Kim, and Young Min Kim
Self-Supervised Domain Adaptation for Visual Navigation with Global Map Consistency
Accepted to WACV 2022
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose a light-weight, self-supervised adaptation for a visual navigation agent to generalize to unseen environment. Given an embodied agent trained in a noiseless environment, our objective is to transfer the agent to a noisy environment where actuation and odometry sensor noise is present. Our method encourages the agent to maximize the consistency between the global maps generated at different time steps in a round-trip trajectory. The proposed task is completely self-supervised, not requiring any supervision from ground-truth pose data or explicit noise model. In addition, optimization of the task objective is extremely light-weight, as training terminates within a few minutes on a commodity GPU. Our experiments show that the proposed task helps the agent to successfully transfer to new, noisy environments. The transferred agent exhibits improved localization and mapping accuracy, further leading to enhanced performance in downstream visual navigation tasks. Moreover, we demonstrate test-time adaptation with our self-supervised task to show its potential applicability in real-world deployment.
[ { "created": "Thu, 14 Oct 2021 07:14:36 GMT", "version": "v1" } ]
2021-10-15
[ [ "Lee", "Eun Sun", "" ], [ "Kim", "Junho", "" ], [ "Kim", "Young Min", "" ] ]
We propose a light-weight, self-supervised adaptation for a visual navigation agent to generalize to unseen environments. Given an embodied agent trained in a noiseless environment, our objective is to transfer the agent to a noisy environment where actuation and odometry sensor noise are present. Our method encourages the agent to maximize the consistency between the global maps generated at different time steps in a round-trip trajectory. The proposed task is completely self-supervised, not requiring any supervision from ground-truth pose data or an explicit noise model. In addition, optimization of the task objective is extremely light-weight, as training terminates within a few minutes on a commodity GPU. Our experiments show that the proposed task helps the agent to successfully transfer to new, noisy environments. The transferred agent exhibits improved localization and mapping accuracy, further leading to enhanced performance in downstream visual navigation tasks. Moreover, we demonstrate test-time adaptation with our self-supervised task to show its potential applicability in real-world deployment.
1806.08037
Qun Liu
Manohar Karki, Qun Liu, Robert DiBiano, Saikat Basu, Supratik Mukhopadhyay
Pixel-level Reconstruction and Classification for Noisy Handwritten Bangla Characters
Paper was accepted at the 16th International Conference on Frontiers in Handwriting Recognition (ICFHR 2018)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classification techniques for images of handwritten characters are susceptible to noise. Quadtrees can be an efficient representation for learning from sparse features. In this paper, we improve the effectiveness of probabilistic quadtrees by using a pixel level classifier to extract the character pixels and remove noise from handwritten character images. The pixel level denoiser (a deep belief network) uses the map responses obtained from a pretrained CNN as features for reconstructing the characters eliminating noise. We experimentally demonstrate the effectiveness of our approach by reconstructing and classifying a noisy version of handwritten Bangla Numeral and Basic Character datasets.
[ { "created": "Thu, 21 Jun 2018 01:30:30 GMT", "version": "v1" } ]
2018-06-22
[ [ "Karki", "Manohar", "" ], [ "Liu", "Qun", "" ], [ "DiBiano", "Robert", "" ], [ "Basu", "Saikat", "" ], [ "Mukhopadhyay", "Supratik", "" ] ]
Classification techniques for images of handwritten characters are susceptible to noise. Quadtrees can be an efficient representation for learning from sparse features. In this paper, we improve the effectiveness of probabilistic quadtrees by using a pixel-level classifier to extract the character pixels and remove noise from handwritten character images. The pixel-level denoiser (a deep belief network) uses the map responses obtained from a pretrained CNN as features for reconstructing the characters while eliminating noise. We experimentally demonstrate the effectiveness of our approach by reconstructing and classifying noisy versions of the handwritten Bangla Numeral and Basic Character datasets.
2403.00582
Nicolas Scharowski
Nicolas Scharowski, Sebastian A. C. Perrig, Lena Fanya Aeschbach, Nick von Felten, Klaus Opwis, Philipp Wintersberger, and Florian Br\"uhlmann
To Trust or Distrust Trust Measures: Validating Questionnaires for Trust in AI
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Despite the importance of trust in human-AI interactions, researchers must adopt questionnaires from other disciplines that lack validation in the AI context. Motivated by the need for reliable and valid measures, we investigated the psychometric quality of two trust questionnaires, the Trust between People and Automation scale (TPA) by Jian et al. (2000) and the Trust Scale for the AI Context (TAI) by Hoffman et al. (2023). In a pre-registered online experiment (N = 1485), participants observed interactions with trustworthy and untrustworthy AI (autonomous vehicle and chatbot). Results support the psychometric quality of the TAI while revealing opportunities to improve the TPA, which we outline in our recommendations for using the two questionnaires. Furthermore, our findings provide additional empirical evidence of trust and distrust as two distinct constructs that may coexist independently. Building on our findings, we highlight the opportunities and added value of measuring both trust and distrust in human-AI research and advocate for further work on both constructs.
[ { "created": "Fri, 1 Mar 2024 15:02:36 GMT", "version": "v1" } ]
2024-03-04
[ [ "Scharowski", "Nicolas", "" ], [ "Perrig", "Sebastian A. C.", "" ], [ "Aeschbach", "Lena Fanya", "" ], [ "von Felten", "Nick", "" ], [ "Opwis", "Klaus", "" ], [ "Wintersberger", "Philipp", "" ], [ "Brühlmann", "Florian", "" ] ]
Despite the importance of trust in human-AI interactions, researchers must adopt questionnaires from other disciplines that lack validation in the AI context. Motivated by the need for reliable and valid measures, we investigated the psychometric quality of two trust questionnaires, the Trust between People and Automation scale (TPA) by Jian et al. (2000) and the Trust Scale for the AI Context (TAI) by Hoffman et al. (2023). In a pre-registered online experiment (N = 1485), participants observed interactions with trustworthy and untrustworthy AI (autonomous vehicle and chatbot). Results support the psychometric quality of the TAI while revealing opportunities to improve the TPA, which we outline in our recommendations for using the two questionnaires. Furthermore, our findings provide additional empirical evidence of trust and distrust as two distinct constructs that may coexist independently. Building on our findings, we highlight the opportunities and added value of measuring both trust and distrust in human-AI research and advocate for further work on both constructs.
2305.01877
Daniel Hader
Daniel Hader and Matthew J. Patitz
The Impacts of Dimensionality, Diffusion, and Directedness on Intrinsic Cross-Model Simulation in Tile-Based Self-Assembly
To appear in the proceedings of ICALP 2023
null
null
null
cs.CG cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithmic self-assembly occurs when disorganized components autonomously combine to form structures and, by their design and the dynamics of the system, are forced to follow the execution of algorithms. Motivated by applications in DNA-nanotechnology, investigations in algorithmic tile-based self-assembly have blossomed into a mature theory with research leveraging tools from computability theory, complexity theory, information theory, and graph theory to develop a wide range of models and show that many are computationally universal, while also exposing powers and limitations of each. Beyond computational universality, the abstract Tile Assembly Model (aTAM) was shown to be intrinsically universal (IU), a strong notion of completeness where a single tile set is capable of simulating all systems within the model; however, this result required non-deterministic tile attachments. This was later confirmed necessary when it was shown that the class of directed aTAM systems is not IU. Building on these results to further investigate the impacts of other dynamics, Hader et al. examined several tile-assembly models which varied across (1) the numbers of dimensions used, (2) restrictions based on diffusion of tiles through space, and (3) whether each system is directed, and showed which models are IU. Such results have shed much light on the roles of various aspects of the dynamics of tile-assembly and their effects on the intrinsic universality of each model. Here we provide direct comparisons of the various models by considering intrinsic simulations between models. We show that in some cases one model is more powerful than another, and in others, pairs of models have mutually exclusive capabilities. This comparison helps to expose the impacts of these three important aspects and further helps define a hierarchy of tile-assembly models.
[ { "created": "Wed, 3 May 2023 03:38:13 GMT", "version": "v1" }, { "created": "Thu, 4 May 2023 13:55:40 GMT", "version": "v2" } ]
2023-05-05
[ [ "Hader", "Daniel", "" ], [ "Patitz", "Matthew J.", "" ] ]
Algorithmic self-assembly occurs when disorganized components autonomously combine to form structures and, by their design and the dynamics of the system, are forced to follow the execution of algorithms. Motivated by applications in DNA-nanotechnology, investigations in algorithmic tile-based self-assembly have blossomed into a mature theory with research leveraging tools from computability theory, complexity theory, information theory, and graph theory to develop a wide range of models and show that many are computationally universal, while also exposing powers and limitations of each. Beyond computational universality, the abstract Tile Assembly Model (aTAM) was shown to be intrinsically universal (IU), a strong notion of completeness where a single tile set is capable of simulating all systems within the model; however, this result required non-deterministic tile attachments. This was later confirmed necessary when it was shown that the class of directed aTAM systems is not IU. Building on these results to further investigate the impacts of other dynamics, Hader et al. examined several tile-assembly models which varied across (1) the numbers of dimensions used, (2) restrictions based on diffusion of tiles through space, and (3) whether each system is directed, and showed which models are IU. Such results have shed much light on the roles of various aspects of the dynamics of tile-assembly and their effects on the intrinsic universality of each model. Here we provide direct comparisons of the various models by considering intrinsic simulations between models. We show that in some cases one model is more powerful than another, and in others, pairs of models have mutually exclusive capabilities. This comparison helps to expose the impacts of these three important aspects and further helps define a hierarchy of tile-assembly models.
2310.12684
Naresh Kshetri
Naresh Kshetri, Vasudha, Denisa Hoxha
knowCC: Knowledge, awareness of computer & cyber ethics between CS/non-CS university students
7 pages, 2 figures
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Technology has advanced dramatically in the previous several years. There are also cyber assaults. Cyberattacks pose a possible danger to information security and the general public. Since data practice and internet consumption rates continue to upswing, cyber awareness has become progressively important. Furthermore, as businesses pace their digital transformation with mobile devices, cloud services, communal media, and Internet of Things services, cybersecurity has appeared as a critical issue in corporate risk management. This research focuses on the relations between cybersecurity awareness, cyber knowledge, computer ethics, cyber ethics, and cyber behavior, as well as protective tools, across university students in general. The findings express that while internet users are alert of cyber threats, they only take the most elementary and easy-to-implement precautions. Several knowledge and awareness have been proposed to knob the issue of cyber security. It also grants the principles of cybersecurity in terms of its structure, workforces, and evidence pertaining to the shield of personal information in the cyber world. The first step is for people to educate themselves about the negative aspects of the internet and to learn more about cyber threats so that they can notice when an attack is taking place. To validate the efficiency of the suggested analysis between CS and non-CS university students, case study along with several comparisons are provided.
[ { "created": "Thu, 19 Oct 2023 12:29:26 GMT", "version": "v1" } ]
2023-10-23
[ [ "Kshetri", "Naresh", "" ], [ "Vasudha", "", "" ], [ "Hoxha", "Denisa", "" ] ]
Technology has advanced dramatically in the previous several years, and so have cyberattacks. Cyberattacks pose a potential danger to information security and the general public. Since data usage and internet consumption rates continue to rise, cyber awareness has become progressively important. Furthermore, as businesses pace their digital transformation with mobile devices, cloud services, social media, and Internet of Things services, cybersecurity has emerged as a critical issue in corporate risk management. This research focuses on the relations between cybersecurity awareness, cyber knowledge, computer ethics, cyber ethics, and cyber behavior, as well as protective tools, among university students in general. The findings show that while internet users are aware of cyber threats, they take only the most elementary and easy-to-implement precautions. Several knowledge and awareness measures have been proposed to address the issue of cyber security. The study also presents the principles of cybersecurity in terms of its structure, workforce, and evidence pertaining to the protection of personal information in the cyber world. The first step is for people to educate themselves about the negative aspects of the internet and to learn more about cyber threats so that they can notice when an attack is taking place. To validate the efficiency of the suggested analysis between CS and non-CS university students, a case study along with several comparisons is provided.
2004.03794
Kristjan Arumae
Kristjan Arumae and Parminder Bhatia
CALM: Continuous Adaptive Learning for Language Modeling
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training large language representation models has become a standard in the natural language processing community. This allows for fine tuning on any number of specific tasks, however, these large high capacity models can continue to train on domain specific unlabeled data to make initialization even more robust for supervised tasks. We demonstrate that in practice these pre-trained models present performance deterioration in the form of catastrophic forgetting when evaluated on tasks from a general domain such as GLUE. In this work we propose CALM, Continuous Adaptive Learning for Language Modeling: techniques to render models which retain knowledge across multiple domains. With these methods, we are able to reduce the performance gap across supervised tasks introduced by task specific models which we demonstrate using a continual learning setting in biomedical and clinical domains.
[ { "created": "Wed, 8 Apr 2020 03:51:17 GMT", "version": "v1" } ]
2020-04-09
[ [ "Arumae", "Kristjan", "" ], [ "Bhatia", "Parminder", "" ] ]
Training large language representation models has become a standard in the natural language processing community. This allows for fine-tuning on any number of specific tasks; however, these large, high-capacity models can continue to train on domain-specific unlabeled data to make initialization even more robust for supervised tasks. We demonstrate that in practice these pre-trained models present performance deterioration in the form of catastrophic forgetting when evaluated on tasks from a general domain such as GLUE. In this work we propose CALM, Continuous Adaptive Learning for Language Modeling: techniques to render models which retain knowledge across multiple domains. With these methods, we are able to reduce the performance gap across supervised tasks introduced by task-specific models, which we demonstrate using a continual learning setting in biomedical and clinical domains.
2402.16979
Lucas Monteiro Paes
Juan Felipe Gomez and Caio Vieira Machado and Lucas Monteiro Paes and Flavio P. Calmon
Algorithmic Arbitrariness in Content Moderation
null
null
null
null
cs.CY cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Machine learning (ML) is widely used to moderate online content. Despite its scalability relative to human moderation, the use of ML introduces unique challenges to content moderation. One such challenge is predictive multiplicity: multiple competing models for content classification may perform equally well on average, yet assign conflicting predictions to the same content. This multiplicity can result from seemingly innocuous choices during model development, such as random seed selection for parameter initialization. We experimentally demonstrate how content moderation tools can arbitrarily classify samples as toxic, leading to arbitrary restrictions on speech. We discuss these findings in terms of human rights set out by the International Covenant on Civil and Political Rights (ICCPR), namely freedom of expression, non-discrimination, and procedural justice. We analyze (i) the extent of predictive multiplicity among state-of-the-art LLMs used for detecting toxic content; (ii) the disparate impact of this arbitrariness across social groups; and (iii) how model multiplicity compares to unambiguous human classifications. Our findings indicate that the up-scaled algorithmic moderation risks legitimizing an algorithmic leviathan, where an algorithm disproportionately manages human rights. To mitigate such risks, our study underscores the need to identify and increase the transparency of arbitrariness in content moderation applications. Since algorithmic content moderation is being fueled by pressing social concerns, such as disinformation and hate speech, our discussion on harms raises concerns relevant to policy debates. Our findings also contribute to content moderation and intermediary liability laws being discussed and passed in many countries, such as the Digital Services Act in the European Union, the Online Safety Act in the United Kingdom, and the Fake News Bill in Brazil.
[ { "created": "Mon, 26 Feb 2024 19:27:00 GMT", "version": "v1" } ]
2024-02-28
[ [ "Gomez", "Juan Felipe", "" ], [ "Machado", "Caio Vieira", "" ], [ "Paes", "Lucas Monteiro", "" ], [ "Calmon", "Flavio P.", "" ] ]
Machine learning (ML) is widely used to moderate online content. Despite its scalability relative to human moderation, the use of ML introduces unique challenges to content moderation. One such challenge is predictive multiplicity: multiple competing models for content classification may perform equally well on average, yet assign conflicting predictions to the same content. This multiplicity can result from seemingly innocuous choices during model development, such as random seed selection for parameter initialization. We experimentally demonstrate how content moderation tools can arbitrarily classify samples as toxic, leading to arbitrary restrictions on speech. We discuss these findings in terms of human rights set out by the International Covenant on Civil and Political Rights (ICCPR), namely freedom of expression, non-discrimination, and procedural justice. We analyze (i) the extent of predictive multiplicity among state-of-the-art LLMs used for detecting toxic content; (ii) the disparate impact of this arbitrariness across social groups; and (iii) how model multiplicity compares to unambiguous human classifications. Our findings indicate that the up-scaled algorithmic moderation risks legitimizing an algorithmic leviathan, where an algorithm disproportionately manages human rights. To mitigate such risks, our study underscores the need to identify and increase the transparency of arbitrariness in content moderation applications. Since algorithmic content moderation is being fueled by pressing social concerns, such as disinformation and hate speech, our discussion on harms raises concerns relevant to policy debates. Our findings also contribute to content moderation and intermediary liability laws being discussed and passed in many countries, such as the Digital Services Act in the European Union, the Online Safety Act in the United Kingdom, and the Fake News Bill in Brazil.
1302.4258
Fanny Yang
Fanny Yang, Volker Pohl, Holger Boche
Phase Retrieval via Structured Modulations in Paley-Wiener Spaces
Submitted to SAMPTA 2013
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers the recovery of continuous time signals from the magnitude of its samples. It uses a combination of structured modulation and oversampling and provides sufficient conditions on the signal and the sampling system such that signal recovery is possible. In particular, it is shown that an average sampling rate of four times the Nyquist rate is sufficient to reconstruct a signal from its magnitude measurements.
[ { "created": "Mon, 18 Feb 2013 13:12:54 GMT", "version": "v1" } ]
2013-02-19
[ [ "Yang", "Fanny", "" ], [ "Pohl", "Volker", "" ], [ "Boche", "Holger", "" ] ]
This paper considers the recovery of continuous-time signals from the magnitudes of their samples. It uses a combination of structured modulation and oversampling and provides sufficient conditions on the signal and the sampling system such that signal recovery is possible. In particular, it is shown that an average sampling rate of four times the Nyquist rate is sufficient to reconstruct a signal from its magnitude measurements.
1605.07322
Asahi Takaoka
Asahi Takaoka
Recognizing Simple-Triangle Graphs by Restricted 2-Chain Subgraph Cover
13 pages, 14 figures, the Author's accepted version of a paper in WALCOM 2017, Keywords: Chain cover, Graph sandwich problem, PI graphs, Simple-triangle graphs, Threshold dimension 2 graphs
WALCOM: Algorithms and Computation. Volume 10167 of Lecture Notes in Computer Science (2017) 177-189
10.1007/978-3-319-53925-6_14
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simple-triangle graph (also known as a PI graph) is the intersection graph of a family of triangles defined by a point on a horizontal line and an interval on another horizontal line. The recognition problem for simple-triangle graphs was a longstanding open problem, and recently a polynomial-time algorithm has been given [G. B. Mertzios, The Recognition of Simple-Triangle Graphs and of Linear-Interval Orders is Polynomial, SIAM J. Discrete Math., 29(3):1150--1185, 2015]. Along with the approach of this paper, we show a simpler recognition algorithm for simple-triangle graphs. To do this, we provide a polynomial-time algorithm to solve the following problem: Given a bipartite graph $G$ and a set $F$ of edges of $G$, find a 2-chain subgraph cover of $G$ such that one of two chain subgraphs has no edges in $F$.
[ { "created": "Tue, 24 May 2016 07:26:39 GMT", "version": "v1" }, { "created": "Mon, 3 Apr 2017 05:31:52 GMT", "version": "v2" } ]
2017-04-04
[ [ "Takaoka", "Asahi", "" ] ]
A simple-triangle graph (also known as a PI graph) is the intersection graph of a family of triangles defined by a point on a horizontal line and an interval on another horizontal line. The recognition problem for simple-triangle graphs was a longstanding open problem, and recently a polynomial-time algorithm has been given [G. B. Mertzios, The Recognition of Simple-Triangle Graphs and of Linear-Interval Orders is Polynomial, SIAM J. Discrete Math., 29(3):1150--1185, 2015]. Along with the approach of this paper, we show a simpler recognition algorithm for simple-triangle graphs. To do this, we provide a polynomial-time algorithm to solve the following problem: Given a bipartite graph $G$ and a set $F$ of edges of $G$, find a 2-chain subgraph cover of $G$ such that one of two chain subgraphs has no edges in $F$.
2108.00918
Kai Yue
Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai
Communication-Efficient Federated Learning via Predictive Coding
Accepted by JSTSP
null
null
null
cs.DC cs.AI cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning can enable remote workers to collaboratively train a shared machine learning model while allowing training data to be kept locally. In the use case of wireless mobile devices, the communication overhead is a critical bottleneck due to limited power and bandwidth. Prior work has utilized various data compression tools such as quantization and sparsification to reduce the overhead. In this paper, we propose a predictive coding based compression scheme for federated learning. The scheme has shared prediction functions among all devices and allows each worker to transmit a compressed residual vector derived from the reference. In each communication round, we select the predictor and quantizer based on the rate-distortion cost, and further reduce the redundancy with entropy coding. Extensive simulations reveal that the communication cost can be reduced up to 99% with even better learning performance when compared with other baseline methods.
[ { "created": "Mon, 2 Aug 2021 14:12:19 GMT", "version": "v1" }, { "created": "Sun, 9 Jan 2022 01:20:06 GMT", "version": "v2" } ]
2022-01-11
[ [ "Yue", "Kai", "" ], [ "Jin", "Richeng", "" ], [ "Wong", "Chau-Wai", "" ], [ "Dai", "Huaiyu", "" ] ]
Federated learning can enable remote workers to collaboratively train a shared machine learning model while allowing training data to be kept locally. In the use case of wireless mobile devices, the communication overhead is a critical bottleneck due to limited power and bandwidth. Prior work has utilized various data compression tools such as quantization and sparsification to reduce the overhead. In this paper, we propose a predictive coding-based compression scheme for federated learning. The scheme has shared prediction functions among all devices and allows each worker to transmit a compressed residual vector derived from the reference. In each communication round, we select the predictor and quantizer based on the rate-distortion cost, and further reduce the redundancy with entropy coding. Extensive simulations reveal that the communication cost can be reduced by up to 99% with even better learning performance when compared with other baseline methods.
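The residual-transmission idea in the abstract above can be illustrated with a minimal sketch. This is not the paper's actual scheme: here the predictor is assumed to be simply the previous round's update, and the quantizer is a fixed uniform one rather than one selected by rate-distortion cost.

```python
import numpy as np

def uniform_quantize(x, num_bits=4):
    """Uniformly quantize a vector to 2**num_bits levels over its own range."""
    levels = 2 ** num_bits
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

def predictive_encode(update, prediction, num_bits=4):
    """Transmit only a quantized residual against a shared prediction."""
    residual = update - prediction
    return uniform_quantize(residual, num_bits)

def predictive_decode(q_residual, prediction):
    """Receiver adds the residual back onto the same shared prediction."""
    return prediction + q_residual

rng = np.random.default_rng(0)
prev_update = rng.normal(size=1000)                        # shared reference
curr_update = prev_update + 0.01 * rng.normal(size=1000)   # temporally correlated

q = predictive_encode(curr_update, prev_update, num_bits=4)
recon = predictive_decode(q, prev_update)

# The residual has a much smaller dynamic range than the raw update,
# so the same bit budget yields far lower reconstruction error.
err_pred = np.abs(recon - curr_update).max()
err_direct = np.abs(uniform_quantize(curr_update, 4) - curr_update).max()
```

Because consecutive updates are correlated, quantizing the residual at the same bit width gives a much smaller error than quantizing the raw update, which is the intuition behind predictive coding for communication reduction.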
2305.10349
Christopher MacLellan
Lane Lawley and Christopher J. MacLellan
Interactive Learning of Hierarchical Tasks from Dialog with GPT
5 pages, 3 figures
null
null
null
cs.HC cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
We present a system for interpretable, symbolic, interactive task learning from dialog using a GPT model as a conversational front-end. The learned tasks are represented as hierarchical decompositions of predicate-argument structures with scoped variable arguments. By using a GPT model to convert interactive dialog into a semantic representation, and then recursively asking for definitions of unknown steps, we show that hierarchical task knowledge can be acquired and re-used in a natural and unrestrained conversational environment. We compare our system to a similar architecture using a more conventional parser and show that our system tolerates a much wider variety of linguistic variance.
[ { "created": "Wed, 17 May 2023 16:32:40 GMT", "version": "v1" } ]
2023-05-18
[ [ "Lawley", "Lane", "" ], [ "MacLellan", "Christopher J.", "" ] ]
We present a system for interpretable, symbolic, interactive task learning from dialog using a GPT model as a conversational front-end. The learned tasks are represented as hierarchical decompositions of predicate-argument structures with scoped variable arguments. By using a GPT model to convert interactive dialog into a semantic representation, and then recursively asking for definitions of unknown steps, we show that hierarchical task knowledge can be acquired and re-used in a natural and unrestrained conversational environment. We compare our system to a similar architecture using a more conventional parser and show that our system tolerates a much wider variety of linguistic variance.
2108.03596
Shuang Li
Qi Wen, Shuang Li, Bingfeng Han, Yi Yuan
ZiGAN: Fine-grained Chinese Calligraphy Font Generation via a Few-shot Style Transfer Approach
Accepted at ACM MM 2021
null
10.1145/3474085.3475225
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chinese character style transfer is a very challenging problem because of the complexity of the glyph shapes or underlying structures and the large number of existing characters compared with English letters. Moreover, the handwriting of calligraphy masters has more irregular strokes and is difficult to obtain in real-world scenarios. Recently, several GAN-based methods have been proposed for font synthesis, but some of them require large amounts of reference data, while others involve cumbersome preprocessing steps to divide each character into parts to be learned and transferred separately. In this paper, we propose a simple but powerful end-to-end Chinese calligraphy font generation framework, ZiGAN, which does not require any manual operation or redundant preprocessing to generate fine-grained target-style characters with few-shot references. To be specific, a few paired samples from different character styles are leveraged to attain a fine-grained correlation between the structures underlying different glyphs. To capture valuable style knowledge of the target and strengthen the coarse-grained understanding of character content, we utilize multiple unpaired samples to align the feature distributions of different character styles. By doing so, only a few target Chinese calligraphy characters are needed to generate the expected style-transferred characters. Experiments demonstrate that our method has state-of-the-art generalization ability in few-shot Chinese character style transfer.
[ { "created": "Sun, 8 Aug 2021 09:50:20 GMT", "version": "v1" } ]
2021-08-10
[ [ "Wen", "Qi", "" ], [ "Li", "Shuang", "" ], [ "Han", "Bingfeng", "" ], [ "Yuan", "Yi", "" ] ]
Chinese character style transfer is a very challenging problem because of the complexity of the glyph shapes or underlying structures and the large number of existing characters compared with English letters. Moreover, the handwriting of calligraphy masters has more irregular strokes and is difficult to obtain in real-world scenarios. Recently, several GAN-based methods have been proposed for font synthesis, but some of them require large amounts of reference data, while others involve cumbersome preprocessing steps to divide each character into parts to be learned and transferred separately. In this paper, we propose a simple but powerful end-to-end Chinese calligraphy font generation framework, ZiGAN, which does not require any manual operation or redundant preprocessing to generate fine-grained target-style characters with few-shot references. To be specific, a few paired samples from different character styles are leveraged to attain a fine-grained correlation between the structures underlying different glyphs. To capture valuable style knowledge of the target and strengthen the coarse-grained understanding of character content, we utilize multiple unpaired samples to align the feature distributions of different character styles. By doing so, only a few target Chinese calligraphy characters are needed to generate the expected style-transferred characters. Experiments demonstrate that our method has state-of-the-art generalization ability in few-shot Chinese character style transfer.
2310.05627
Shuai Jia
Yujie Ding, Shuai Jia, Tianyi Ma, Bingcheng Mao, Xiuze Zhou, Liuliu Li and Dongming Han
Integrating Stock Features and Global Information via Large Language Models for Enhanced Stock Return Prediction
8 pages, International Joint Conferences on Artificial Intelligence
International Joint Conferences on Artificial Intelligence,2023
null
null
cs.CL cs.LG q-fin.ST
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The remarkable achievements and rapid advancements of Large Language Models (LLMs) such as ChatGPT and GPT-4 have showcased their immense potential in quantitative investment. Traders can effectively leverage these LLMs to analyze financial news and predict stock returns accurately. However, integrating LLMs into existing quantitative models presents two primary challenges: the insufficient utilization of semantic information embedded within LLMs and the difficulties in aligning the latent information within LLMs with pre-existing quantitative stock features. We propose a novel framework consisting of two components to surmount these challenges. The first component, the Local-Global (LG) model, introduces three distinct strategies for modeling global information. These approaches are grounded respectively on stock features, the capabilities of LLMs, and a hybrid method combining the two paradigms. The second component, Self-Correlated Reinforcement Learning (SCRL), focuses on aligning the embeddings of financial news generated by LLMs with stock features within the same semantic space. By implementing our framework, we have demonstrated superior performance in Rank Information Coefficient and returns, particularly compared to models relying only on stock features in the China A-share market.
[ { "created": "Mon, 9 Oct 2023 11:34:18 GMT", "version": "v1" } ]
2023-10-11
[ [ "Ding", "Yujie", "" ], [ "Jia", "Shuai", "" ], [ "Ma", "Tianyi", "" ], [ "Mao", "Bingcheng", "" ], [ "Zhou", "Xiuze", "" ], [ "Li", "Liuliu", "" ], [ "Han", "Dongming", "" ] ]
The remarkable achievements and rapid advancements of Large Language Models (LLMs) such as ChatGPT and GPT-4 have showcased their immense potential in quantitative investment. Traders can effectively leverage these LLMs to analyze financial news and predict stock returns accurately. However, integrating LLMs into existing quantitative models presents two primary challenges: the insufficient utilization of semantic information embedded within LLMs and the difficulties in aligning the latent information within LLMs with pre-existing quantitative stock features. We propose a novel framework consisting of two components to surmount these challenges. The first component, the Local-Global (LG) model, introduces three distinct strategies for modeling global information. These approaches are grounded respectively on stock features, the capabilities of LLMs, and a hybrid method combining the two paradigms. The second component, Self-Correlated Reinforcement Learning (SCRL), focuses on aligning the embeddings of financial news generated by LLMs with stock features within the same semantic space. By implementing our framework, we have demonstrated superior performance in Rank Information Coefficient and returns, particularly compared to models relying only on stock features in the China A-share market.
1612.05568
Naoise Holohan
Naoise Holohan, Douglas J. Leith, Oliver Mason
Optimal Differentially Private Mechanisms for Randomised Response
null
null
10.1109/TIFS.2017.2718487
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine a generalised Randomised Response (RR) technique in the context of differential privacy and examine the optimality of such mechanisms. Strict and relaxed differential privacy are considered for binary outputs. By examining the error of a statistical estimator, we present closed solutions for the optimal mechanism(s) in both cases. The optimal mechanism is also given for the specific case of the original RR technique as introduced by Warner in 1965.
[ { "created": "Fri, 16 Dec 2016 17:38:49 GMT", "version": "v1" } ]
2017-10-05
[ [ "Holohan", "Naoise", "" ], [ "Leith", "Douglas J.", "" ], [ "Mason", "Oliver", "" ] ]
We examine a generalised Randomised Response (RR) technique in the context of differential privacy and examine the optimality of such mechanisms. Strict and relaxed differential privacy are considered for binary outputs. By examining the error of a statistical estimator, we present closed solutions for the optimal mechanism(s) in both cases. The optimal mechanism is also given for the specific case of the original RR technique as introduced by Warner in 1965.
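The binary randomised response setting discussed above has a standard textbook form that is easy to sketch. The snippet below illustrates Warner-style RR under strict epsilon-differential privacy with the usual unbiased inversion estimator; it is an illustration of the setting, not the paper's optimal mechanisms.

```python
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    """Binary randomised response: report the truth with probability
    e^eps / (1 + e^eps), otherwise report the opposite.
    This mechanism satisfies eps-differential privacy."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return truth if random.random() < p_truth else not truth

def estimate_proportion(responses, epsilon: float) -> float:
    """Unbiased estimator of the true 'yes' proportion, inverting the
    known truth-telling probability (Warner-style correction)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(responses) / len(responses)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(42)
epsilon = 1.0
true_rate = 0.3
data = [random.random() < true_rate for _ in range(200_000)]
reports = [randomized_response(x, epsilon) for x in data]
print(estimate_proportion(reports, epsilon))  # typically within ~0.01 of true_rate
```

The estimator's variance grows as epsilon shrinks (the denominator 2p - 1 approaches zero), which is exactly the privacy/utility trade-off the statistical-error analysis in the abstract optimizes over.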
2212.09100
Abdullah Hamdi
Abdullah Hamdi, Bernard Ghanem, Matthias Nie{\ss}ner
SPARF: Large-Scale Learning of 3D Sparse Radiance Fields from Few Input Images
published at ICCV 2023 workshop proceedings
null
10.1109/ICCVW60793.2023.00315
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent advances in Neural Radiance Fields (NeRFs) treat the problem of novel view synthesis as Sparse Radiance Field (SRF) optimization using sparse voxels for efficient and fast rendering (Plenoxels, InstantNGP). In order to leverage machine learning and adoption of SRFs as a 3D representation, we present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis consisting of $\sim$ 17 million images rendered from nearly 40,000 shapes at high resolution (400 x 400 pixels). The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis and includes more than one million 3D-optimized radiance fields with multiple voxel resolutions. Furthermore, we propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only few views. This is done by using the densely collected SPARF dataset and 3D sparse convolutions. SuRFNet employs partial SRFs from few/one images and a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields that can be rendered from novel views. Our approach achieves state-of-the-art results in the task of unconstrained novel view synthesis based on few views on ShapeNet as compared to recent baselines. The SPARF dataset is made public with the code and models on the project website https://abdullahamdi.com/sparf/ .
[ { "created": "Sun, 18 Dec 2022 14:56:22 GMT", "version": "v1" }, { "created": "Tue, 14 Mar 2023 12:08:11 GMT", "version": "v2" }, { "created": "Mon, 21 Aug 2023 12:53:09 GMT", "version": "v3" } ]
2024-02-26
[ [ "Hamdi", "Abdullah", "" ], [ "Ghanem", "Bernard", "" ], [ "Nießner", "Matthias", "" ] ]
Recent advances in Neural Radiance Fields (NeRFs) treat the problem of novel view synthesis as Sparse Radiance Field (SRF) optimization using sparse voxels for efficient and fast rendering (Plenoxels, InstantNGP). In order to leverage machine learning and adoption of SRFs as a 3D representation, we present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis consisting of $\sim$ 17 million images rendered from nearly 40,000 shapes at high resolution (400 x 400 pixels). The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis and includes more than one million 3D-optimized radiance fields with multiple voxel resolutions. Furthermore, we propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only few views. This is done by using the densely collected SPARF dataset and 3D sparse convolutions. SuRFNet employs partial SRFs from few/one images and a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields that can be rendered from novel views. Our approach achieves state-of-the-art results in the task of unconstrained novel view synthesis based on few views on ShapeNet as compared to recent baselines. The SPARF dataset is made public with the code and models on the project website https://abdullahamdi.com/sparf/ .
1105.4702
Joachim Selke
Joachim Selke and Wolf-Tilo Balke
Exploiting Conceptual Knowledge for Querying Information Systems
International Conference on Philosophy's Relevance in Information Science (PRIS), Paderborn, Germany, 2008
null
null
null
cs.IR cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Whereas today's information systems are well-equipped for efficient query handling, their strict mathematical foundations hamper their use for everyday tasks. In daily life, people expect information to be offered in a personalized and focused way. But currently, personalization in digital systems still only takes explicit knowledge into account and does not yet process conceptual information often naturally implied by users. We discuss how to bridge the gap between users and today's systems, building on results from cognitive psychology.
[ { "created": "Tue, 24 May 2011 08:01:15 GMT", "version": "v1" } ]
2011-05-25
[ [ "Selke", "Joachim", "" ], [ "Balke", "Wolf-Tilo", "" ] ]
Whereas today's information systems are well-equipped for efficient query handling, their strict mathematical foundations hamper their use for everyday tasks. In daily life, people expect information to be offered in a personalized and focused way. But currently, personalization in digital systems still only takes explicit knowledge into account and does not yet process conceptual information often naturally implied by users. We discuss how to bridge the gap between users and today's systems, building on results from cognitive psychology.
2305.09890
Young-Joo Han
Young-Joo Han and Ha-Jin Yu
SS-BSN: Attentive Blind-Spot Network for Self-Supervised Denoising with Nonlocal Self-Similarity
Accepted to IJCAI 2023
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, numerous studies have been conducted on supervised learning-based image denoising methods. However, these methods rely on large-scale noisy-clean image pairs, which are difficult to obtain in practice. Denoising methods with self-supervised training, which can be trained with only noisy images, have been proposed to address this limitation. These methods are based on convolutional neural networks (CNNs) and have shown promising performance. However, CNN-based methods do not exploit the nonlocal self-similarities that are essential in traditional methods, which can limit performance. This paper presents self-similarity attention (SS-Attention), a novel self-attention module that can capture nonlocal self-similarities to solve this problem. We focus on designing a lightweight self-attention module in a pixel-wise manner, which is nearly impossible to implement using the classic self-attention module due to its quadratically increasing complexity with spatial resolution. Furthermore, we integrate SS-Attention into the blind-spot network, called the self-similarity-based blind-spot network (SS-BSN). We conduct experiments on real-world image denoising tasks. The proposed method quantitatively and qualitatively outperforms state-of-the-art methods in self-supervised denoising on the Smartphone Image Denoising Dataset (SIDD) and Darmstadt Noise Dataset (DND) benchmark datasets.
[ { "created": "Wed, 17 May 2023 01:55:45 GMT", "version": "v1" } ]
2023-05-18
[ [ "Han", "Young-Joo", "" ], [ "Yu", "Ha-Jin", "" ] ]
Recently, numerous studies have been conducted on supervised learning-based image denoising methods. However, these methods rely on large-scale noisy-clean image pairs, which are difficult to obtain in practice. Denoising methods with self-supervised training, which can be trained with only noisy images, have been proposed to address this limitation. These methods are based on convolutional neural networks (CNNs) and have shown promising performance. However, CNN-based methods do not exploit the nonlocal self-similarities that are essential in traditional methods, which can limit performance. This paper presents self-similarity attention (SS-Attention), a novel self-attention module that can capture nonlocal self-similarities to solve this problem. We focus on designing a lightweight self-attention module in a pixel-wise manner, which is nearly impossible to implement using the classic self-attention module due to its quadratically increasing complexity with spatial resolution. Furthermore, we integrate SS-Attention into the blind-spot network, called the self-similarity-based blind-spot network (SS-BSN). We conduct experiments on real-world image denoising tasks. The proposed method quantitatively and qualitatively outperforms state-of-the-art methods in self-supervised denoising on the Smartphone Image Denoising Dataset (SIDD) and Darmstadt Noise Dataset (DND) benchmark datasets.
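The abstract's point about quadratic complexity can be made concrete with a back-of-the-envelope count. This is only an illustration of why naive pixel-wise self-attention is infeasible at image resolutions, not part of the paper's SS-Attention design.

```python
def attention_matrix_entries(height: int, width: int) -> int:
    """Number of entries in a dense pixel-wise self-attention matrix:
    (H*W) queries each attending to (H*W) keys."""
    n = height * width
    return n * n

# Quadratic in the number of pixels, i.e. quartic in the side length:
# doubling the resolution multiplies the attention matrix by 16x.
small = attention_matrix_entries(128, 128)   # 16384^2 entries
large = attention_matrix_entries(256, 256)   # 65536^2 entries
ratio = large // small                       # 16
```

At 256 x 256 the dense attention matrix alone has over four billion entries, which is why a lightweight pixel-wise module is needed in the first place.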
1904.00077
Dimitar Ho
Dimitar Ho and John C. Doyle
Scalable Robust Adaptive Control from the System Level Perspective
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We will present a new general framework for robust and adaptive control that allows for distributed and scalable learning and control of large systems of interconnected linear subsystems. The control method is demonstrated for a linear time-invariant system with bounded parameter uncertainties, disturbances and noise. The presented scheme continuously collects measurements to reduce the uncertainty about the system parameters and adapts dynamic robust controllers online in a stable and performance-improving way. A key enabler for our approach is choosing a time-varying dynamic controller implementation, inspired by recent work on System Level Synthesis. We leverage a new robustness result for this implementation to propose a general robust adaptive control algorithm. In particular, the algorithm allows us to impose communication and delay constraints on the controller implementation and is formulated as a sequence of robust optimization problems that can be solved in a distributed manner. The proposed control methodology performs particularly well when the interconnection between systems is sparse and the dynamics of local regions of subsystems depend only on a small number of parameters. As we will show on an exemplary five-dimensional chain system, the algorithm can utilize system structure to efficiently learn and control the entire system while respecting communication and implementation constraints. Moreover, although current theoretical results require the assumption of small initial uncertainties to guarantee robustness, we will present simulations that show good closed-loop performance even in the case of large uncertainties. This suggests that the assumption is not critical for the presented technique; future work will focus on providing less conservative guarantees.
[ { "created": "Fri, 29 Mar 2019 20:09:39 GMT", "version": "v1" } ]
2019-04-02
[ [ "Ho", "Dimitar", "" ], [ "Doyle", "John C.", "" ] ]
We will present a new general framework for robust and adaptive control that allows for distributed and scalable learning and control of large systems of interconnected linear subsystems. The control method is demonstrated for a linear time-invariant system with bounded parameter uncertainties, disturbances and noise. The presented scheme continuously collects measurements to reduce the uncertainty about the system parameters and adapts dynamic robust controllers online in a stable and performance-improving way. A key enabler for our approach is choosing a time-varying dynamic controller implementation, inspired by recent work on System Level Synthesis. We leverage a new robustness result for this implementation to propose a general robust adaptive control algorithm. In particular, the algorithm allows us to impose communication and delay constraints on the controller implementation and is formulated as a sequence of robust optimization problems that can be solved in a distributed manner. The proposed control methodology performs particularly well when the interconnection between systems is sparse and the dynamics of local regions of subsystems depend only on a small number of parameters. As we will show on an exemplary five-dimensional chain system, the algorithm can utilize system structure to efficiently learn and control the entire system while respecting communication and implementation constraints. Moreover, although current theoretical results require the assumption of small initial uncertainties to guarantee robustness, we will present simulations that show good closed-loop performance even in the case of large uncertainties. This suggests that the assumption is not critical for the presented technique; future work will focus on providing less conservative guarantees.
2202.00379
Thien-Minh Nguyen
Thien-Minh Nguyen, Shenghai Yuan, Muqing Cao, Yang Lyu, Thien Hoang Nguyen, Lihua Xie
NTU VIRAL: A Visual-Inertial-Ranging-Lidar Dataset, From an Aerial Vehicle Viewpoint
IJRR 2021
null
10.1177/02783649211052312
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
In recent years, autonomous robots have become ubiquitous in research and daily life. Among many factors, public datasets play an important role in the progress of this field, as they waive the tall order of initial investment in hardware and manpower. However, for research on autonomous aerial systems, there appears to be a relative lack of public datasets on par with those used for autonomous driving and ground robots. Thus, to fill in this gap, we conduct a data collection exercise on an aerial platform equipped with an extensive and unique set of sensors: two 3D lidars, two hardware-synchronized global-shutter cameras, multiple Inertial Measurement Units (IMUs), and especially, multiple Ultra-wideband (UWB) ranging units. The comprehensive sensor suite resembles that of an autonomous driving car, but features distinct and challenging characteristics of aerial operations. We record multiple datasets in several challenging indoor and outdoor conditions. Calibration results and ground truth from a high-accuracy laser tracker are also included in each package. All resources can be accessed via our webpage https://ntu-aris.github.io/ntu_viral_dataset.
[ { "created": "Tue, 1 Feb 2022 12:46:52 GMT", "version": "v1" } ]
2022-02-02
[ [ "Nguyen", "Thien-Minh", "" ], [ "Yuan", "Shenghai", "" ], [ "Cao", "Muqing", "" ], [ "Lyu", "Yang", "" ], [ "Nguyen", "Thien Hoang", "" ], [ "Xie", "Lihua", "" ] ]
In recent years, autonomous robots have become ubiquitous in research and daily life. Among many factors, public datasets play an important role in the progress of this field, as they waive the tall order of initial investment in hardware and manpower. However, for research on autonomous aerial systems, there appears to be a relative lack of public datasets on par with those used for autonomous driving and ground robots. Thus, to fill in this gap, we conduct a data collection exercise on an aerial platform equipped with an extensive and unique set of sensors: two 3D lidars, two hardware-synchronized global-shutter cameras, multiple Inertial Measurement Units (IMUs), and especially, multiple Ultra-wideband (UWB) ranging units. The comprehensive sensor suite resembles that of an autonomous driving car, but features distinct and challenging characteristics of aerial operations. We record multiple datasets in several challenging indoor and outdoor conditions. Calibration results and ground truth from a high-accuracy laser tracker are also included in each package. All resources can be accessed via our webpage https://ntu-aris.github.io/ntu_viral_dataset.
2010.04892
Jiakun Liu
Jiakun Liu, Xin Xia, David Lo, Haoxiang Zhang, Ying Zou, Ahmed E. Hassan, and Shanping Li
Broken External Links on Stack Overflow
null
null
10.1109/TSE.2021.3086494
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stack Overflow hosts valuable programming-related knowledge with 11,926,354 links that reference third-party websites. The links that reference resources hosted outside the Stack Overflow website extend the Stack Overflow knowledge base substantially. However, with the rapid development of programming-related knowledge, many resources hosted on the Internet are no longer available. Based on our analysis of the Stack Overflow data that was released on Jun. 2, 2019, 14.2% of the links on Stack Overflow are broken. The broken links on Stack Overflow can obstruct viewers from obtaining desired programming-related knowledge, and potentially damage the reputation of Stack Overflow, as viewers might regard posts with broken links as obsolete. In this paper, we characterize the broken links on Stack Overflow. 65% of the broken links in our sampled questions are used to show examples, e.g., code examples. 70% of the broken links in our sampled answers are used to provide supporting information, e.g., explaining a certain concept and describing a step to solve a problem. Only 1.67% of the posts with broken links are highlighted as such by viewers in the posts' comments. Only 5.8% of the posts with broken links had the broken links removed. Viewers cannot fully rely on vote scores to detect broken links, as broken links are common across posts with different vote scores. The websites that host resources that can be maintained by their users are referenced by broken links the most on Stack Overflow -- a prominent example of such websites is GitHub. The posts and comments related to web technologies, i.e., JavaScript, HTML, CSS, and jQuery, are associated with more broken links. Based on our findings, we shed light on future directions and provide recommendations for practitioners and researchers.
[ { "created": "Sat, 10 Oct 2020 03:39:29 GMT", "version": "v1" } ]
2022-02-15
[ [ "Liu", "Jiakun", "" ], [ "Xia", "Xin", "" ], [ "Lo", "David", "" ], [ "Zhang", "Haoxiang", "" ], [ "Zou", "Ying", "" ], [ "Hassan", "Ahmed E.", "" ], [ "Li", "Shanping", "" ] ]
Stack Overflow hosts valuable programming-related knowledge with 11,926,354 links that reference third-party websites. The links that reference resources hosted outside the Stack Overflow website extend the Stack Overflow knowledge base substantially. However, with the rapid development of programming-related knowledge, many resources hosted on the Internet are no longer available. Based on our analysis of the Stack Overflow data that was released on Jun. 2, 2019, 14.2% of the links on Stack Overflow are broken. The broken links on Stack Overflow can obstruct viewers from obtaining desired programming-related knowledge, and potentially damage the reputation of Stack Overflow, as viewers might regard posts with broken links as obsolete. In this paper, we characterize the broken links on Stack Overflow. 65% of the broken links in our sampled questions are used to show examples, e.g., code examples. 70% of the broken links in our sampled answers are used to provide supporting information, e.g., explaining a certain concept and describing a step to solve a problem. Only 1.67% of the posts with broken links are highlighted as such by viewers in the posts' comments. Only 5.8% of the posts with broken links had the broken links removed. Viewers cannot fully rely on vote scores to detect broken links, as broken links are common across posts with different vote scores. The websites that host resources that can be maintained by their users are referenced by broken links the most on Stack Overflow -- a prominent example of such websites is GitHub. The posts and comments related to web technologies, i.e., JavaScript, HTML, CSS, and jQuery, are associated with more broken links. Based on our findings, we shed light on future directions and provide recommendations for practitioners and researchers.
1704.05817
Wenbin Li
Wenbin Li, Da Chen, Zhihan Lv, Yan Yan, Darren Cosker
Learn to Model Motion from Blurry Footages
Preprint of our paper accepted by Pattern Recognition
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
It is difficult to recover the motion field from real-world footage given a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework by interleaving a Convolutional Neural Network (CNN) and a traditional optical flow energy. We first construct a CNN architecture using a novel learnable directional filtering layer. This layer encodes the angle and distance similarity matrix between blur and camera motion, which is able to enhance the blur features of camera-shake footage. The proposed CNNs are then integrated into an iterative optical flow framework, which enables modelling and solving both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against the state-of-the-art approaches.
[ { "created": "Wed, 19 Apr 2017 16:54:54 GMT", "version": "v1" } ]
2017-04-20
[ [ "Li", "Wenbin", "" ], [ "Chen", "Da", "" ], [ "Lv", "Zhihan", "" ], [ "Yan", "Yan", "" ], [ "Cosker", "Darren", "" ] ]
It is difficult to recover the motion field from real-world footage given a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework by interleaving a Convolutional Neural Network (CNN) and a traditional optical flow energy. We first construct a CNN architecture using a novel learnable directional filtering layer. This layer encodes the angle and distance similarity matrix between blur and camera motion, which is able to enhance the blur features of camera-shake footage. The proposed CNNs are then integrated into an iterative optical flow framework, which enables modelling and solving both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against the state-of-the-art approaches.
2212.03490
Yue Ma
Yue Ma, Tianyu Yang, Yin Shan, Xiu Li
SimVTP: Simple Video Text Pre-training with Masked Autoencoders
Github: https://github.com/mayuelala/SimVTP
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents SimVTP: a Simple Video-Text Pretraining framework via masked autoencoders. We randomly mask out the spatial-temporal tubes of the input video and the word tokens of the input text and then feed them into a unified autoencoder to reconstruct the missing pixels and words. Our SimVTP has several properties: 1) Thanks to the unified autoencoder, SimVTP reconstructs the masked signal of one modality with the help of another modality, which implicitly learns the cross-modal alignment between video tubes and text tokens. 2) SimVTP not only benefits from a high video masking ratio (e.g. 90%) due to the temporal redundancy of video, but also needs a high text masking ratio (e.g. 75%), which is much higher than BERT's (e.g. 15%), to achieve optimal performance. This is because the aid of the video modality makes text reconstruction less challenging, so a higher mask ratio is needed to make the pretext task harder for useful feature learning. 3) Equipping SimVTP with video-text contrastive learning (VTC) and video-text matching (VTM), which are two commonly used cross-modal training strategies, could further improve the transferable performance significantly. 4) SimVTP is data-efficient, e.g., pre-trained on only 10% of the data of WebVid-2M, SimVTP achieves surprisingly good results (43.8 R@1) on MSRVTT, which is far above recent state-of-the-art methods pre-trained on both CC3M and WebVid-2M. We transfer our pre-trained model to various downstream tasks and achieve superior performance. The code and models will be released at https://github.com/mayuelala/SimVTP.
[ { "created": "Wed, 7 Dec 2022 07:14:22 GMT", "version": "v1" } ]
2022-12-08
[ [ "Ma", "Yue", "" ], [ "Yang", "Tianyu", "" ], [ "Shan", "Yin", "" ], [ "Li", "Xiu", "" ] ]
This paper presents SimVTP: a Simple Video-Text Pretraining framework via masked autoencoders. We randomly mask out the spatial-temporal tubes of the input video and the word tokens of the input text and then feed them into a unified autoencoder to reconstruct the missing pixels and words. Our SimVTP has several properties: 1) Thanks to the unified autoencoder, SimVTP reconstructs the masked signal of one modality with the help of another modality, which implicitly learns the cross-modal alignment between video tubes and text tokens. 2) SimVTP not only benefits from a high video masking ratio (e.g. 90%) due to the temporal redundancy of video, but also needs a high text masking ratio (e.g. 75%), which is much higher than BERT's (e.g. 15%), to achieve optimal performance. This is because the aid of the video modality makes text reconstruction less challenging, so a higher mask ratio is needed to make the pretext task harder for useful feature learning. 3) Equipping SimVTP with video-text contrastive learning (VTC) and video-text matching (VTM), which are two commonly used cross-modal training strategies, could further improve the transferable performance significantly. 4) SimVTP is data-efficient, e.g., pre-trained on only 10% of the data of WebVid-2M, SimVTP achieves surprisingly good results (43.8 R@1) on MSRVTT, which is far above recent state-of-the-art methods pre-trained on both CC3M and WebVid-2M. We transfer our pre-trained model to various downstream tasks and achieve superior performance. The code and models will be released at https://github.com/mayuelala/SimVTP.
1808.02997
Felipe Campelo
Felipe Campelo and Fernanda Takahashi
Sample size estimation for power and accuracy in the experimental comparison of algorithms
Main text: 31 pages, 5 figures; Supplemental materials: 20 pages, 3 figures; Submitted to the Journal of Heuristics on October 2017
null
10.1007/s10732-018-9396-7
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experimental comparisons of performance represent an important aspect of research on optimization algorithms. In this work we present a methodology for defining the required sample sizes for designing experiments with desired statistical properties for the comparison of two methods on a given problem class. The proposed approach allows the experimenter to define desired levels of accuracy for estimates of mean performance differences on individual problem instances, as well as the desired statistical power for comparing mean performances over a problem class of interest. The method calculates the required number of problem instances, and runs the algorithms on each test instance so that the accuracy of the estimated differences in performance is controlled at the predefined level. Two examples illustrate the application of the proposed method, and its ability to achieve the desired statistical properties with a methodologically sound definition of the relevant sample sizes.
[ { "created": "Thu, 9 Aug 2018 02:17:52 GMT", "version": "v1" }, { "created": "Mon, 15 Oct 2018 14:52:32 GMT", "version": "v2" } ]
2018-10-16
[ [ "Campelo", "Felipe", "" ], [ "Takahashi", "Fernanda", "" ] ]
Experimental comparisons of performance represent an important aspect of research on optimization algorithms. In this work we present a methodology for defining the required sample sizes for designing experiments with desired statistical properties for the comparison of two methods on a given problem class. The proposed approach allows the experimenter to define desired levels of accuracy for estimates of mean performance differences on individual problem instances, as well as the desired statistical power for comparing mean performances over a problem class of interest. The method calculates the required number of problem instances, and runs the algorithms on each test instance so that the accuracy of the estimated differences in performance is controlled at the predefined level. Two examples illustrate the application of the proposed method, and its ability to achieve the desired statistical properties with a methodologically sound definition of the relevant sample sizes.
1808.08181
Boxiang Dong
Haipei Sun, Boxiang Dong, Hui (Wendy) Wang, Ting Yu, Zhan Qin
Truth Inference on Sparse Crowdsourcing Data with Local Differential Privacy
null
null
null
null
cs.CR cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crowdsourcing has arisen as a new problem-solving paradigm for tasks that are difficult for computers but easy for humans. However, since the answers collected from the recruited participants (workers) may contain sensitive information, crowdsourcing raises serious privacy concerns. In this paper, we investigate the problem of protecting answer privacy under local differential privacy (LDP), by which individual workers randomize their answers independently and send the perturbed answers to the task requester. The utility goal is to enable the true answer (i.e., the truth) to be inferred from the perturbed data with high accuracy. One of the challenges of LDP perturbation is the sparsity of worker answers (i.e., each worker only answers a small number of tasks). Simple extensions of the existing approaches (e.g., Laplace perturbation and randomized response) may incur large errors of truth inference on sparse data. Thus we design an efficient new matrix factorization (MF) algorithm under LDP. We prove that our MF algorithm can provide both an LDP guarantee and a small error of truth inference, regardless of the sparsity of worker answers. We perform extensive experiments on real-world and synthetic datasets, and demonstrate that the MF algorithm performs better than the existing LDP algorithms on sparse crowdsourcing data.
[ { "created": "Fri, 24 Aug 2018 15:48:06 GMT", "version": "v1" } ]
2018-08-27
[ [ "Sun", "Haipei", "" ], [ "Dong", "Boxiang", "" ], [ "Wang", "Hui (Wendy)", "" ], [ "Yu", "Ting", "" ], [ "Qin", "Zhan", "" ] ]
Crowdsourcing has arisen as a new problem-solving paradigm for tasks that are difficult for computers but easy for humans. However, since the answers collected from the recruited participants (workers) may contain sensitive information, crowdsourcing raises serious privacy concerns. In this paper, we investigate the problem of protecting answer privacy under local differential privacy (LDP), by which individual workers randomize their answers independently and send the perturbed answers to the task requester. The utility goal is to enable the true answer (i.e., the truth) to be inferred from the perturbed data with high accuracy. One of the challenges of LDP perturbation is the sparsity of worker answers (i.e., each worker only answers a small number of tasks). Simple extensions of the existing approaches (e.g., Laplace perturbation and randomized response) may incur large errors of truth inference on sparse data. Thus we design an efficient new matrix factorization (MF) algorithm under LDP. We prove that our MF algorithm can provide both an LDP guarantee and a small error of truth inference, regardless of the sparsity of worker answers. We perform extensive experiments on real-world and synthetic datasets, and demonstrate that the MF algorithm performs better than the existing LDP algorithms on sparse crowdsourcing data.
2011.02749
Busra Tegin
Busra Tegin, Eduin E. Hernandez, Stefano Rini, Tolga M. Duman
Straggler Mitigation through Unequal Error Protection for Distributed Matrix Multiplication
6 pages, 6 figures
null
null
null
cs.IT cs.DC math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale machine learning and data mining methods routinely distribute computations across multiple agents to parallelize processing. The time required for computation at the agents is affected by the availability of local resources, giving rise to the "straggler problem" in which the computation results are held back by unresponsive agents. For this problem, linear coding of the matrix sub-blocks can be used to introduce resilience toward straggling. The Parameter Server (PS) utilizes a channel code and distributes the matrices to the workers for multiplication. It then produces an approximation to the desired matrix multiplication using the results of the computations received at a given deadline. In this paper, we propose to employ Unequal Error Protection (UEP) codes to alleviate the straggler problem. The resiliency level of each sub-block is chosen according to its norm, as blocks with larger norms have higher effects on the result of the matrix multiplication. We validate the effectiveness of our scheme both theoretically and through numerical evaluations. We derive a theoretical characterization of the performance of UEP using random linear codes, and compare it to the case of equal error protection. We also apply the proposed coding strategy to the computation of the back-propagation step in the training of a Deep Neural Network (DNN), for which we investigate the fundamental trade-off between precision and the time required for the computations.
[ { "created": "Thu, 5 Nov 2020 10:43:32 GMT", "version": "v1" }, { "created": "Fri, 19 Mar 2021 08:24:36 GMT", "version": "v2" } ]
2021-03-22
[ [ "Tegin", "Busra", "" ], [ "Hernandez", "Eduin E.", "" ], [ "Rini", "Stefano", "" ], [ "Duman", "Tolga M.", "" ] ]
Large-scale machine learning and data mining methods routinely distribute computations across multiple agents to parallelize processing. The time required for computation at the agents is affected by the availability of local resources, giving rise to the "straggler problem" in which the computation results are held back by unresponsive agents. For this problem, linear coding of the matrix sub-blocks can be used to introduce resilience toward straggling. The Parameter Server (PS) utilizes a channel code and distributes the matrices to the workers for multiplication. It then produces an approximation to the desired matrix multiplication using the results of the computations received at a given deadline. In this paper, we propose to employ Unequal Error Protection (UEP) codes to alleviate the straggler problem. The resiliency level of each sub-block is chosen according to its norm, as blocks with larger norms have higher effects on the result of the matrix multiplication. We validate the effectiveness of our scheme both theoretically and through numerical evaluations. We derive a theoretical characterization of the performance of UEP using random linear codes, and compare it to the case of equal error protection. We also apply the proposed coding strategy to the computation of the back-propagation step in the training of a Deep Neural Network (DNN), for which we investigate the fundamental trade-off between precision and the time required for the computations.
2011.01644
Juan Quintero
Juan Quintero and Zinaida Benenson
Understanding Usability and User Acceptance of Usage-Based Insurance from Users' View
null
null
10.1145/3366750.3366759
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intelligent Transportation Systems (ITS) cover a variety of services related to topics such as traffic control and safe driving, among others. In the context of car insurance, a recent application for ITS is known as Usage-Based Insurance (UBI). UBI refers to car insurance policies that enable insurance companies to collect individual driving data using a telematics device. Collected data is analysed and used to offer individual discounts based on driving behaviour and to provide feedback on driving performance. Although there are plenty of advertising materials about the benefits of UBI, the user acceptance and the usability of UBI systems have not received research attention so far. To this end, we conduct two user studies: semi-structured interviews with UBI users and a qualitative analysis of 186 customer inquiries from a web forum of a German insurance company. We find that under certain circumstances, UBI provokes dangerous driving behaviour. These situations could be mitigated by making UBI transparent and the feedback customisable by drivers. Moreover, the country driving conditions, the policy conditions, and the perceived driving style influence UBI acceptance.
[ { "created": "Tue, 3 Nov 2020 11:44:27 GMT", "version": "v1" } ]
2020-11-04
[ [ "Quintero", "Juan", "" ], [ "Benenson", "Zinaida", "" ] ]
Intelligent Transportation Systems (ITS) cover a variety of services related to topics such as traffic control and safe driving, among others. In the context of car insurance, a recent application for ITS is known as Usage-Based Insurance (UBI). UBI refers to car insurance policies that enable insurance companies to collect individual driving data using a telematics device. Collected data is analysed and used to offer individual discounts based on driving behaviour and to provide feedback on driving performance. Although there are plenty of advertising materials about the benefits of UBI, the user acceptance and the usability of UBI systems have not received research attention so far. To this end, we conduct two user studies: semi-structured interviews with UBI users and a qualitative analysis of 186 customer inquiries from a web forum of a German insurance company. We find that under certain circumstances, UBI provokes dangerous driving behaviour. These situations could be mitigated by making UBI transparent and the feedback customisable by drivers. Moreover, the country driving conditions, the policy conditions, and the perceived driving style influence UBI acceptance.
2312.13490
Vaggos Chatziafratis
Vaggos Chatziafratis, Piotr Indyk
Dimension-Accuracy Tradeoffs in Contrastive Embeddings for Triplets, Terminals & Top-k Nearest Neighbors
Abstract shortened for arxiv
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metric embeddings traditionally study how to map $n$ items to a target metric space such that distance lengths are not heavily distorted; but what if we only care to preserve the relative order of the distances (and not their length)? In this paper, we are motivated by the following basic question: given triplet comparisons of the form ``item $i$ is closer to item $j$ than to item $k$,'' can we find low-dimensional Euclidean representations for the $n$ items that respect those distance comparisons? Such order-preserving embeddings naturally arise in important applications and have been studied since the 1950s, under the name of ordinal or non-metric embeddings. Our main results are: 1. Nearly-Tight Bounds on Triplet Dimension: We introduce the natural concept of triplet dimension of a dataset, and surprisingly, we show that in order for an ordinal embedding to be triplet-preserving, its dimension needs to grow as $\frac n2$ in the worst case. This is optimal (up to constant) as $n-1$ dimensions always suffice. 2. Tradeoffs for Dimension vs (Ordinal) Relaxation: We then relax the requirement that every triplet should be exactly preserved and present almost tight lower bounds for the maximum ratio between distances whose relative order was inverted by the embedding; this ratio is known as (ordinal) relaxation in the literature and serves as a counterpart to (metric) distortion. 3. New Bounds on Terminal and Top-$k$-NNs Embeddings: Going beyond triplets, we then study two well-motivated scenarios where we care about preserving specific sets of distances (not necessarily triplets). The first scenario is Terminal Ordinal Embeddings and the second scenario is top-$k$-NNs Ordinal Embeddings. To the best of our knowledge, these are some of the first tradeoffs on triplet-preserving ordinal embeddings and the first study of Terminal and Top-$k$-NNs Ordinal Embeddings.
[ { "created": "Wed, 20 Dec 2023 23:54:18 GMT", "version": "v1" }, { "created": "Fri, 29 Dec 2023 13:47:48 GMT", "version": "v2" } ]
2024-01-01
[ [ "Chatziafratis", "Vaggos", "" ], [ "Indyk", "Piotr", "" ] ]
Metric embeddings traditionally study how to map $n$ items to a target metric space such that distance lengths are not heavily distorted; but what if we only care to preserve the relative order of the distances (and not their length)? In this paper, we are motivated by the following basic question: given triplet comparisons of the form ``item $i$ is closer to item $j$ than to item $k$,'' can we find low-dimensional Euclidean representations for the $n$ items that respect those distance comparisons? Such order-preserving embeddings naturally arise in important applications and have been studied since the 1950s, under the name of ordinal or non-metric embeddings. Our main results are: 1. Nearly-Tight Bounds on Triplet Dimension: We introduce the natural concept of triplet dimension of a dataset, and surprisingly, we show that in order for an ordinal embedding to be triplet-preserving, its dimension needs to grow as $\frac n2$ in the worst case. This is optimal (up to constant) as $n-1$ dimensions always suffice. 2. Tradeoffs for Dimension vs (Ordinal) Relaxation: We then relax the requirement that every triplet should be exactly preserved and present almost tight lower bounds for the maximum ratio between distances whose relative order was inverted by the embedding; this ratio is known as (ordinal) relaxation in the literature and serves as a counterpart to (metric) distortion. 3. New Bounds on Terminal and Top-$k$-NNs Embeddings: Going beyond triplets, we then study two well-motivated scenarios where we care about preserving specific sets of distances (not necessarily triplets). The first scenario is Terminal Ordinal Embeddings and the second scenario is top-$k$-NNs Ordinal Embeddings. To the best of our knowledge, these are some of the first tradeoffs on triplet-preserving ordinal embeddings and the first study of Terminal and Top-$k$-NNs Ordinal Embeddings.
cs/0606123
Atos Alves
Atos Ramos Alves
Use MPLS in Lan's
9 pages, 0 figures tests in laboratory
null
null
null
cs.NI cs.CR
null
We present the results of laboratory research focused on showing the real impact of using MPLS technology in LANs. Through this research we verify that, from a cost/benefit point of view, the investment in this technology proves very attractive, although the adoption of other measures is necessary in order to establish a satisfactory level of quality and security when sending packets over a VPN. The technology also meets the network's latency requirements very well: the tests show that it consumes, on average, one third of the time spent on the same function with IP routing.
[ { "created": "Thu, 29 Jun 2006 14:44:02 GMT", "version": "v1" } ]
2007-05-23
[ [ "Alves", "Atos Ramos", "" ] ]
We present the results of laboratory research focused on showing the real impact of using MPLS technology in LANs. Through this research we verify that, from a cost/benefit point of view, the investment in this technology proves very attractive, although the adoption of other measures is necessary in order to establish a satisfactory level of quality and security when sending packets over a VPN. The technology also meets the network's latency requirements very well: the tests show that it consumes, on average, one third of the time spent on the same function with IP routing.
2003.10870
Valerio Perrone
Eric Hans Lee, Valerio Perrone, Cedric Archambeau, Matthias Seeger
Cost-aware Bayesian Optimization
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible. While BO budgets are typically given in iterations, this implicitly measures convergence in terms of iteration count and assumes each evaluation has identical cost. In practice, evaluation costs may vary in different regions of the search space. For example, the cost of neural network training increases quadratically with layer size, which is a typical hyperparameter. Cost-aware BO measures convergence with alternative cost metrics such as time, energy, or money, for which vanilla BO methods are unsuited. We introduce Cost Apportioned BO (CArBO), which attempts to minimize an objective function at as little cost as possible. CArBO combines a cost-effective initial design with a cost-cooled optimization phase which depreciates a learned cost model as iterations proceed. On a set of 20 black-box function optimization problems we show that, given the same cost budget, CArBO finds significantly better hyperparameter configurations than competing methods.
[ { "created": "Sun, 22 Mar 2020 14:51:04 GMT", "version": "v1" } ]
2020-03-25
[ [ "Lee", "Eric Hans", "" ], [ "Perrone", "Valerio", "" ], [ "Archambeau", "Cedric", "" ], [ "Seeger", "Matthias", "" ] ]
Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible. While BO budgets are typically given in iterations, this implicitly measures convergence in terms of iteration count and assumes each evaluation has identical cost. In practice, evaluation costs may vary in different regions of the search space. For example, the cost of neural network training increases quadratically with layer size, which is a typical hyperparameter. Cost-aware BO measures convergence with alternative cost metrics such as time, energy, or money, for which vanilla BO methods are unsuited. We introduce Cost Apportioned BO (CArBO), which attempts to minimize an objective function at as little cost as possible. CArBO combines a cost-effective initial design with a cost-cooled optimization phase which depreciates a learned cost model as iterations proceed. On a set of 20 black-box function optimization problems we show that, given the same cost budget, CArBO finds significantly better hyperparameter configurations than competing methods.
1412.0781
Zhizhen Zhao
Zhizhen Zhao, Yoel Shkolnisky, and Amit Singer
Fast Steerable Principal Component Analysis
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2D images as large as a few hundred pixels in each direction. Here we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of two-dimensional images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of $n$ images of size $L \times L$ pixels, the computational complexity of our algorithm is $O(nL^3 + L^4)$, while existing algorithms take $O(nL^4)$. The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the non-uniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.
[ { "created": "Tue, 2 Dec 2014 04:24:03 GMT", "version": "v1" }, { "created": "Fri, 12 Dec 2014 18:21:40 GMT", "version": "v2" }, { "created": "Sat, 16 May 2015 02:06:04 GMT", "version": "v3" }, { "created": "Fri, 23 Oct 2015 02:14:53 GMT", "version": "v4" }, { "created": "Tue, 15 Dec 2015 19:26:37 GMT", "version": "v5" } ]
2015-12-16
[ [ "Zhao", "Zhizhen", "" ], [ "Shkolnisky", "Yoel", "" ], [ "Singer", "Amit", "" ] ]
Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2D images as large as a few hundred pixels in each direction. Here we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of two-dimensional images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of $n$ images of size $L \times L$ pixels, the computational complexity of our algorithm is $O(nL^3 + L^4)$, while existing algorithms take $O(nL^4)$. The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the non-uniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.
2007.03659
Shaun Kane
Shaun Kane, Richard Ladner, and Clayton Lewis
Promoting Strategic Research on Inclusive Access to Rich Online Content and Services
A Computing Community Consortium (CCC) workshop report, 16 pages
null
null
ccc2014report_5
cs.CY cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Access to content and services online is increasingly important for everyone, including people with disabilities. National commitments, including the Americans with Disabilities Act, and international resolutions, including the United Nations Declaration of the Rights of Persons with Disabilities, call for work to ensure that people with disabilities can participate fully in the online world. Gains in education, employment and health, as well as in civic engagement, social participation, and personal independence will follow from enhanced inclusion online. Research in many areas of computer science, including recognition technology, natural language processing, personalization, software architecture, and others, is needed to secure these benefits. Organizing this research calls for partnerships among academic researchers, federal agencies, and commercial organizations, as well as effective division of labor and cooperation between computer scientists, behavioral scientists, advocacy groups, and consumers.
[ { "created": "Tue, 7 Jul 2020 17:50:03 GMT", "version": "v1" } ]
2020-07-08
[ [ "Kane", "Shaun", "" ], [ "Ladner", "Richard", "" ], [ "Lewis", "Clayton", "" ] ]
Access to content and services online is increasingly important for everyone, including people with disabilities. National commitments, including the Americans with Disabilities Act, and international resolutions, including the United Nations Declaration of the Rights of Persons with Disabilities, call for work to ensure that people with disabilities can participate fully in the online world. Gains in education, employment and health, as well as in civic engagement, social participation, and personal independence will follow from enhanced inclusion online. Research in many areas of computer science, including recognition technology, natural language processing, personalization, software architecture, and others, is needed to secure these benefits. Organizing this research calls for partnerships among academic researchers, federal agencies, and commercial organizations, as well as effective division of labor and cooperation between computer scientists, behavioral scientists, advocacy groups, and consumers.
2103.11424
Mingjie Luo
Mingjie Luo, Siwei Wang, Xinwang Liu, Wenxuan Tu, Yi Zhang, Xifeng Guo, Sihang Zhou and En Zhu
Deep Distribution-preserving Incomplete Clustering with Optimal Transport
Data are provided at https://github.com/wangsiwei2010/Single-view-incomplete-datasets-for-deep-clustering
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Clustering is a fundamental task in the computer vision and machine learning community. Although various methods have been proposed, the performance of existing approaches drops dramatically when handling incomplete high-dimensional data (which is common in real-world applications). To solve the problem, we propose a novel deep incomplete clustering method, named Deep Distribution-preserving Incomplete Clustering with Optimal Transport (DDIC-OT). To avoid the insufficient sample utilization of existing methods, which are limited by the few fully-observed samples, we propose to measure distribution distance with optimal transport for reconstruction evaluation instead of the traditional pixel-wise loss function. Moreover, a clustering loss on the latent feature is introduced to regularize the embedding with more discrimination capability. As a consequence, the network becomes more robust against missing features, and the unified framework, which combines clustering and sample imputation, enables the two procedures to negotiate and better serve each other. Extensive experiments demonstrate that the proposed network achieves superior and stable clustering performance improvements against existing state-of-the-art incomplete clustering methods over different missing ratios.
[ { "created": "Sun, 21 Mar 2021 15:43:17 GMT", "version": "v1" } ]
2021-03-23
[ [ "Luo", "Mingjie", "" ], [ "Wang", "Siwei", "" ], [ "Liu", "Xinwang", "" ], [ "Tu", "Wenxuan", "" ], [ "Zhang", "Yi", "" ], [ "Guo", "Xifeng", "" ], [ "Zhou", "Sihang", "" ], [ "Zhu", "En", "" ] ]
Clustering is a fundamental task in the computer vision and machine learning community. Although various methods have been proposed, the performance of existing approaches drops dramatically when handling incomplete high-dimensional data (which is common in real-world applications). To solve the problem, we propose a novel deep incomplete clustering method, named Deep Distribution-preserving Incomplete Clustering with Optimal Transport (DDIC-OT). To avoid the insufficient sample utilization of existing methods, which are limited by the few fully-observed samples, we propose to measure distribution distance with optimal transport for reconstruction evaluation instead of the traditional pixel-wise loss function. Moreover, a clustering loss on the latent feature is introduced to regularize the embedding with more discrimination capability. As a consequence, the network becomes more robust against missing features, and the unified framework, which combines clustering and sample imputation, enables the two procedures to negotiate and better serve each other. Extensive experiments demonstrate that the proposed network achieves superior and stable clustering performance improvements against existing state-of-the-art incomplete clustering methods over different missing ratios.
2406.16374
Dongyang Li
Dongyang Li, Taolin Zhang, Longtao Huang, Chengyu Wang, Xiaofeng He, Hui Xue
KEHRL: Learning Knowledge-Enhanced Language Representations with Hierarchical Reinforcement Learning
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge-enhanced pre-trained language models (KEPLMs) leverage relation triples from knowledge graphs (KGs) and integrate these external data sources into language models via self-supervised learning. Previous works treat knowledge enhancement as two independent operations, i.e., knowledge injection and knowledge integration. In this paper, we propose to learn Knowledge-Enhanced language representations with Hierarchical Reinforcement Learning (KEHRL), which jointly addresses the problems of detecting positions for knowledge injection and integrating external knowledge into the model in order to avoid injecting inaccurate or irrelevant knowledge. Specifically, a high-level reinforcement learning (RL) agent utilizes both internal and prior knowledge to iteratively detect essential positions in texts for knowledge injection, which filters out less meaningful entities to avoid diverting the knowledge learning direction. Once the entity positions are selected, a relevant triple filtration module is triggered to perform low-level RL to dynamically refine the triples associated with polysemic entities through binary-valued actions. Experiments validate KEHRL's effectiveness in probing factual knowledge and enhancing the model's performance on various natural language understanding tasks.
[ { "created": "Mon, 24 Jun 2024 07:32:35 GMT", "version": "v1" } ]
2024-06-25
[ [ "Li", "Dongyang", "" ], [ "Zhang", "Taolin", "" ], [ "Huang", "Longtao", "" ], [ "Wang", "Chengyu", "" ], [ "He", "Xiaofeng", "" ], [ "Xue", "Hui", "" ] ]
Knowledge-enhanced pre-trained language models (KEPLMs) leverage relation triples from knowledge graphs (KGs) and integrate these external data sources into language models via self-supervised learning. Previous works treat knowledge enhancement as two independent operations, i.e., knowledge injection and knowledge integration. In this paper, we propose to learn Knowledge-Enhanced language representations with Hierarchical Reinforcement Learning (KEHRL), which jointly addresses the problems of detecting positions for knowledge injection and integrating external knowledge into the model in order to avoid injecting inaccurate or irrelevant knowledge. Specifically, a high-level reinforcement learning (RL) agent utilizes both internal and prior knowledge to iteratively detect essential positions in texts for knowledge injection, which filters out less meaningful entities to avoid diverting the knowledge learning direction. Once the entity positions are selected, a relevant triple filtration module is triggered to perform low-level RL to dynamically refine the triples associated with polysemic entities through binary-valued actions. Experiments validate KEHRL's effectiveness in probing factual knowledge and enhancing the model's performance on various natural language understanding tasks.
1811.02007
Emil Bj\"ornson
Emil Bj\"ornson, Luca Sanguinetti, Jakob Hoydis
Hardware Distortion Correlation Has Negligible Impact on UL Massive MIMO Spectral Efficiency
Published in IEEE Transactions on Communications, 14 pages, 12 figures. This version corrects a typo in Appendix C. arXiv admin note: text overlap with arXiv:1805.07958
null
10.1109/TCOMM.2018.2877331
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper analyzes how the distortion created by hardware impairments in a multiple-antenna base station affects the uplink spectral efficiency (SE), with a focus on Massive MIMO. This distortion is correlated across the antennas, but has often been approximated as uncorrelated to facilitate (tractable) SE analysis. To determine when this approximation is accurate, basic properties of distortion correlation are first uncovered. Then, we separately analyze the distortion correlation caused by third-order non-linearities and by quantization. Finally, we study the SE numerically and show that the distortion correlation can be safely neglected in Massive MIMO when there are sufficiently many users. Under i.i.d. Rayleigh fading and equal signal-to-noise ratios (SNRs), this occurs for more than five transmitting users. Other channel models and SNR variations have only minor impact on the accuracy. We also demonstrate the importance of taking the distortion characteristics into account in the receive combining.
[ { "created": "Mon, 5 Nov 2018 19:52:34 GMT", "version": "v1" }, { "created": "Fri, 24 Dec 2021 07:28:40 GMT", "version": "v2" } ]
2021-12-28
[ [ "Björnson", "Emil", "" ], [ "Sanguinetti", "Luca", "" ], [ "Hoydis", "Jakob", "" ] ]
This paper analyzes how the distortion created by hardware impairments in a multiple-antenna base station affects the uplink spectral efficiency (SE), with a focus on Massive MIMO. This distortion is correlated across the antennas, but has often been approximated as uncorrelated to facilitate (tractable) SE analysis. To determine when this approximation is accurate, basic properties of distortion correlation are first uncovered. Then, we separately analyze the distortion correlation caused by third-order non-linearities and by quantization. Finally, we study the SE numerically and show that the distortion correlation can be safely neglected in Massive MIMO when there are sufficiently many users. Under i.i.d. Rayleigh fading and equal signal-to-noise ratios (SNRs), this occurs for more than five transmitting users. Other channel models and SNR variations have only minor impact on the accuracy. We also demonstrate the importance of taking the distortion characteristics into account in the receive combining.
2206.01002
Ali Karimi
Ali Karimi and Zahra Mousavi Kouzehkanan and Reshad Hosseini and Hadi Asheri
Introducing One Sided Margin Loss for Solving Classification Problems in Deep Networks
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a new loss function, OSM (One-Sided Margin), to solve maximum-margin classification problems effectively. Unlike the hinge loss, in OSM the margin is explicitly determined with corresponding hyperparameters and then the classification problem is solved. In experiments, we observe that using the OSM loss leads to faster training speeds and better accuracies than binary and categorical cross-entropy in several commonly used deep models for classification and optical character recognition problems. OSM has consistently shown better classification accuracies than cross-entropy and hinge losses for small to large neural networks. It has also led to a more efficient training procedure. We achieved state-of-the-art accuracies for small networks on several benchmark datasets: CIFAR10 (98.82\%), CIFAR100 (91.56\%), Flowers (98.04\%), and Stanford Cars (93.91\%), with considerable improvements over other loss functions. Moreover, the accuracies are also better than those of cross-entropy and hinge loss for large networks. Therefore, we strongly believe that OSM is a powerful alternative to hinge and cross-entropy losses for training deep neural networks on classification tasks.
[ { "created": "Thu, 2 Jun 2022 12:03:39 GMT", "version": "v1" } ]
2022-06-03
[ [ "Karimi", "Ali", "" ], [ "Kouzehkanan", "Zahra Mousavi", "" ], [ "Hosseini", "Reshad", "" ], [ "Asheri", "Hadi", "" ] ]
This paper introduces a new loss function, OSM (One-Sided Margin), to solve maximum-margin classification problems effectively. Unlike the hinge loss, in OSM the margin is explicitly determined with corresponding hyperparameters and then the classification problem is solved. In experiments, we observe that using the OSM loss leads to faster training speeds and better accuracies than binary and categorical cross-entropy in several commonly used deep models for classification and optical character recognition problems. OSM has consistently shown better classification accuracies than cross-entropy and hinge losses for small to large neural networks. It has also led to a more efficient training procedure. We achieved state-of-the-art accuracies for small networks on several benchmark datasets: CIFAR10 (98.82\%), CIFAR100 (91.56\%), Flowers (98.04\%), and Stanford Cars (93.91\%), with considerable improvements over other loss functions. Moreover, the accuracies are also better than those of cross-entropy and hinge loss for large networks. Therefore, we strongly believe that OSM is a powerful alternative to hinge and cross-entropy losses for training deep neural networks on classification tasks.
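The OSM abstract contrasts a margin set explicitly by a hyperparameter with the implicit margin of the hinge loss. The abstract does not give the exact OSM formula, so the sketch below is a hypothetical margin loss of the kind described: a hinge-style penalty for labels in {-1, +1} whose margin `m` is a free hyperparameter, shown only to make the "explicit margin" idea concrete:

```python
import numpy as np

def margin_loss(scores, labels, margin=1.0):
    """Hinge-style loss with an explicit margin hyperparameter.

    NOTE: the abstract does not spell out the OSM formula; this is a
    hypothetical sketch of a loss in which the margin is a tunable
    hyperparameter, in the spirit described. `labels` are in {-1, +1}.
    """
    return float(np.mean(np.maximum(0.0, margin - labels * scores)))

scores = np.array([2.0, -0.5, 0.2])
labels = np.array([1.0, -1.0, 1.0])
# Per-sample terms with margin=1: max(0, 1-2)=0, max(0, 1-0.5)=0.5,
# max(0, 1-0.2)=0.8, so the mean is 1.3/3.
loss = margin_loss(scores, labels, margin=1.0)
```

Raising `margin` demands larger separation before a sample contributes zero loss, which is the kind of knob the paper's hyperparameters expose.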
1710.09718
Yuhang Song
Yuhang Song, Christopher Grimm, Xianming Wang, Michael L. Littman
Learning Approximate Stochastic Transition Models
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine the problem of learning mappings from state to state, suitable for use in a model-based reinforcement-learning setting, that simultaneously generalize to novel states and can capture stochastic transitions. We show that currently popular generative adversarial networks struggle to learn these stochastic transition models but a modification to their loss functions results in a powerful learning algorithm for this class of problems.
[ { "created": "Thu, 26 Oct 2017 14:06:52 GMT", "version": "v1" } ]
2017-10-27
[ [ "Song", "Yuhang", "" ], [ "Grimm", "Christopher", "" ], [ "Wang", "Xianming", "" ], [ "Littman", "Michael L.", "" ] ]
We examine the problem of learning mappings from state to state, suitable for use in a model-based reinforcement-learning setting, that simultaneously generalize to novel states and can capture stochastic transitions. We show that currently popular generative adversarial networks struggle to learn these stochastic transition models but a modification to their loss functions results in a powerful learning algorithm for this class of problems.
1906.09996
Unai Lopez-Novoa
Unai Lopez-Novoa, Cyril Charron, John Evans, Leandro Beltrachini
The BIDS Toolbox: A web service to manage brain imaging datasets
Paper for the Workshop on Data Preprocessing for Big Biomedical Data 2019, held in conjunction with the IEEE Smart World Congress 2019, Leicester, UK
null
null
null
cs.DL q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data sharing is a key factor for ensuring reproducibility and transparency of scientific experiments, and neuroimaging is no exception. The vast heterogeneity of data formats and imaging modalities utilised in the field makes it a very challenging problem. In this context, the Brain Imaging Data Structure (BIDS) appears as a solution for organising and describing neuroimaging datasets. Since its publication in 2015, BIDS has gained widespread attention in the field, as it provides a common way to arrange and share multimodal brain images. Despite the evident benefits it presents, BIDS has not yet been widely adopted in the field of MRI, and we believe that this is due to the lack of a go-to tool to create and manage BIDS datasets. Motivated by this, we present the BIDS Toolbox, a web service to manage brain imaging datasets in BIDS format. Different from other tools, the BIDS Toolbox allows the creation and modification of BIDS-compliant datasets based on MRI data. It provides both a web interface and REST endpoints for its use. In this paper we describe its design and early prototype, and provide a link to the public source code repository.
[ { "created": "Mon, 24 Jun 2019 14:34:38 GMT", "version": "v1" } ]
2019-06-25
[ [ "Lopez-Novoa", "Unai", "" ], [ "Charron", "Cyril", "" ], [ "Evans", "John", "" ], [ "Beltrachini", "Leandro", "" ] ]
Data sharing is a key factor for ensuring reproducibility and transparency of scientific experiments, and neuroimaging is no exception. The vast heterogeneity of data formats and imaging modalities utilised in the field makes it a very challenging problem. In this context, the Brain Imaging Data Structure (BIDS) appears as a solution for organising and describing neuroimaging datasets. Since its publication in 2015, BIDS has gained widespread attention in the field, as it provides a common way to arrange and share multimodal brain images. Despite the evident benefits it presents, BIDS has not yet been widely adopted in the field of MRI, and we believe that this is due to the lack of a go-to tool to create and manage BIDS datasets. Motivated by this, we present the BIDS Toolbox, a web service to manage brain imaging datasets in BIDS format. Different from other tools, the BIDS Toolbox allows the creation and modification of BIDS-compliant datasets based on MRI data. It provides both a web interface and REST endpoints for its use. In this paper we describe its design and early prototype, and provide a link to the public source code repository.
1003.4074
William Jackson
Shivaji P. Mirashe, N. V. Kalyankar
Cloud Computing
null
Journal of Computing, Volume 2, Issue 3, March 2010
null
null
cs.DC cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing as you know it is about to change: your applications and documents are going to move from the desktop into the cloud. I'm talking about cloud computing, where applications and files are hosted on a "cloud" consisting of thousands of computers and servers, all linked together and accessible via the Internet. With cloud computing, everything you do is now web-based instead of being desktop-based. You can access all your programs and documents from any computer that's connected to the Internet. How will cloud computing change the way you work? For one thing, you're no longer tied to a single computer. You can take your work anywhere because it's always accessible via the web. In addition, cloud computing facilitates group collaboration, as all group members can access the same programs and documents from wherever they happen to be located. Cloud computing might sound far-fetched, but chances are you're already using some cloud applications. If you're using a web-based email program, such as Gmail or Hotmail, you're computing in the cloud. If you're using a web-based application such as Google Calendar or Apple Mobile Me, you're computing in the cloud. If you're using a file- or photo-sharing site, such as Flickr or Picasa Web Albums, you're computing in the cloud. It's the technology of the future, available to use today.
[ { "created": "Mon, 22 Mar 2010 06:16:48 GMT", "version": "v1" } ]
2010-03-23
[ [ "Mirashe", "Shivaji P.", "" ], [ "Kalyankar", "N. V.", "" ] ]
Computing as you know it is about to change: your applications and documents are going to move from the desktop into the cloud. I'm talking about cloud computing, where applications and files are hosted on a "cloud" consisting of thousands of computers and servers, all linked together and accessible via the Internet. With cloud computing, everything you do is now web-based instead of being desktop-based. You can access all your programs and documents from any computer that's connected to the Internet. How will cloud computing change the way you work? For one thing, you're no longer tied to a single computer. You can take your work anywhere because it's always accessible via the web. In addition, cloud computing facilitates group collaboration, as all group members can access the same programs and documents from wherever they happen to be located. Cloud computing might sound far-fetched, but chances are you're already using some cloud applications. If you're using a web-based email program, such as Gmail or Hotmail, you're computing in the cloud. If you're using a web-based application such as Google Calendar or Apple Mobile Me, you're computing in the cloud. If you're using a file- or photo-sharing site, such as Flickr or Picasa Web Albums, you're computing in the cloud. It's the technology of the future, available to use today.
2204.06046
Bryce Ferguson
Bryce L. Ferguson, Philip N. Brown, Jason R. Marden
Avoiding Unintended Consequences: How Incentives Aid Information Provisioning in Bayesian Congestion Games
null
null
10.1109/CDC51059.2022.9992777
null
cs.GT cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
When users lack specific knowledge of various system parameters, their uncertainty may lead them to make undesirable deviations in their decision making. To alleviate this, an informed system operator may elect to signal information to uninformed users with the hope of persuading them to take more preferable actions. In this work, we study public and truthful signalling mechanisms in the context of Bayesian congestion games on parallel networks. We provide bounds on the possible benefit a signalling policy can provide with and without the concurrent use of monetary incentives. We find that though revealing information can reduce system cost in some settings, it can also be detrimental and cause worse performance than not signalling at all. However, by utilizing both signalling and incentive mechanisms, the system operator can guarantee that revealing information does not worsen performance while offering similar opportunities for improvement. These findings emerge from the closed form bounds we derive on the benefit a signalling policy can provide. We provide a numerical example which illustrates the phenomenon that revealing more information can degrade performance when incentives are not used and improves performance when incentives are used.
[ { "created": "Tue, 12 Apr 2022 19:06:24 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2023 04:02:08 GMT", "version": "v2" } ]
2023-03-31
[ [ "Ferguson", "Bryce L.", "" ], [ "Brown", "Philip N.", "" ], [ "Marden", "Jason R.", "" ] ]
When users lack specific knowledge of various system parameters, their uncertainty may lead them to make undesirable deviations in their decision making. To alleviate this, an informed system operator may elect to signal information to uninformed users with the hope of persuading them to take more preferable actions. In this work, we study public and truthful signalling mechanisms in the context of Bayesian congestion games on parallel networks. We provide bounds on the possible benefit a signalling policy can provide with and without the concurrent use of monetary incentives. We find that though revealing information can reduce system cost in some settings, it can also be detrimental and cause worse performance than not signalling at all. However, by utilizing both signalling and incentive mechanisms, the system operator can guarantee that revealing information does not worsen performance while offering similar opportunities for improvement. These findings emerge from the closed form bounds we derive on the benefit a signalling policy can provide. We provide a numerical example which illustrates the phenomenon that revealing more information can degrade performance when incentives are not used and improves performance when incentives are used.
1807.04561
Fabio Patrizi
Giuseppe De Giacomo, Brian Logan, Paolo Felli, Fabio Patrizi, Sebastian Sardina
Situation Calculus for Synthesis of Manufacturing Controllers
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Manufacturing is transitioning from a mass production model to a manufacturing as a service model in which manufacturing facilities 'bid' to produce products. To decide whether to bid for a complex, previously unseen product, a manufacturing facility must be able to synthesize, 'on the fly', a process plan controller that delegates abstract manufacturing tasks in the supplied process recipe to the appropriate manufacturing resources, e.g., CNC machines, robots etc. Previous work in applying AI behaviour composition to synthesize process plan controllers has considered only finite state ad-hoc representations. Here, we study the problem in the relational setting of the Situation Calculus. By taking advantage of recent work on abstraction in the Situation Calculus, process recipes and available resources are represented by ConGolog programs over, respectively, an abstract and a concrete action theory. This allows us to capture the problem in a formal, general framework, and show decidability for the case of bounded action theories. We also provide techniques for actually synthesizing the controller.
[ { "created": "Thu, 12 Jul 2018 12:05:41 GMT", "version": "v1" } ]
2018-07-13
[ [ "De Giacomo", "Giuseppe", "" ], [ "Logan", "Brian", "" ], [ "Felli", "Paolo", "" ], [ "Patrizi", "Fabio", "" ], [ "Sardina", "Sebastian", "" ] ]
Manufacturing is transitioning from a mass production model to a manufacturing as a service model in which manufacturing facilities 'bid' to produce products. To decide whether to bid for a complex, previously unseen product, a manufacturing facility must be able to synthesize, 'on the fly', a process plan controller that delegates abstract manufacturing tasks in the supplied process recipe to the appropriate manufacturing resources, e.g., CNC machines, robots etc. Previous work in applying AI behaviour composition to synthesize process plan controllers has considered only finite state ad-hoc representations. Here, we study the problem in the relational setting of the Situation Calculus. By taking advantage of recent work on abstraction in the Situation Calculus, process recipes and available resources are represented by ConGolog programs over, respectively, an abstract and a concrete action theory. This allows us to capture the problem in a formal, general framework, and show decidability for the case of bounded action theories. We also provide techniques for actually synthesizing the controller.
2105.11836
Cyrus Vahidi
Cyrus Vahidi, Charalampos Saitis, Gy\"orgy Fazekas
A Modulation Front-End for Music Audio Tagging
null
null
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional Neural Networks have been extensively explored in the task of automatic music tagging. The problem can be approached by using either engineered time-frequency features or raw audio as input. Modulation filter bank representations that have been actively researched as a basis for timbre perception have the potential to facilitate the extraction of perceptually salient features. We explore end-to-end learned front-ends for audio representation learning, ModNet and SincModNet, that incorporate a temporal modulation processing block. The structure is effectively analogous to a modulation filter bank, where the FIR filter center frequencies are learned in a data-driven manner. The expectation is that a perceptually motivated filter bank can provide a useful representation for identifying music features. Our experimental results provide a fully visualisable and interpretable front-end temporal modulation decomposition of raw audio. We evaluate the performance of our model against the state-of-the-art of music tagging on the MagnaTagATune dataset. We analyse the impact on performance for particular tags when time-frequency bands are subsampled by the modulation filters at a progressively reduced rate. We demonstrate that modulation filtering provides promising results for music tagging and feature representation, without using extensive musical domain knowledge in the design of this front-end.
[ { "created": "Tue, 25 May 2021 11:05:24 GMT", "version": "v1" } ]
2021-05-26
[ [ "Vahidi", "Cyrus", "" ], [ "Saitis", "Charalampos", "" ], [ "Fazekas", "György", "" ] ]
Convolutional Neural Networks have been extensively explored in the task of automatic music tagging. The problem can be approached by using either engineered time-frequency features or raw audio as input. Modulation filter bank representations that have been actively researched as a basis for timbre perception have the potential to facilitate the extraction of perceptually salient features. We explore end-to-end learned front-ends for audio representation learning, ModNet and SincModNet, that incorporate a temporal modulation processing block. The structure is effectively analogous to a modulation filter bank, where the FIR filter center frequencies are learned in a data-driven manner. The expectation is that a perceptually motivated filter bank can provide a useful representation for identifying music features. Our experimental results provide a fully visualisable and interpretable front-end temporal modulation decomposition of raw audio. We evaluate the performance of our model against the state-of-the-art of music tagging on the MagnaTagATune dataset. We analyse the impact on performance for particular tags when time-frequency bands are subsampled by the modulation filters at a progressively reduced rate. We demonstrate that modulation filtering provides promising results for music tagging and feature representation, without using extensive musical domain knowledge in the design of this front-end.
1012.0027
Fen Zhou
Fen Zhou (IRISA), Miklos Molnar (IRISA), Bernard Cousin (IRISA)
Avoidance of multicast incapable branching nodes for multicast routing in WDM networks
null
Photonic Network Communication 18, 3 (2009) 378-392
10.1007/s11107-009-0200-3
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we study the multicast routing problem in all-optical WDM networks under the sparse light splitting constraint. To implement a multicast session, several light-trees may have to be used due to the limited fanouts of network nodes. Although many multicast routing algorithms have been proposed in order to reduce the total number of wavelength channels used (total cost) for a multicast session, the maximum number of wavelengths required in one fiber link (link stress) and the end-to-end delay are two parameters which are not always taken into consideration. It is known that the shortest path tree (SPT) results in the optimal end-to-end delay, but it cannot be employed directly for multicast routing in sparse light splitting WDM networks. Hence, we propose a novel wavelength routing algorithm which tries to avoid the multicast incapable branching nodes (MIBs, branching nodes without splitting capability) in the shortest-path-based multicast tree to diminish the link stress. Good parts of the shortest-path tree are retained by the algorithm to reduce the end-to-end delay. The algorithm consists of three steps: (1) a DijkstraPro algorithm with priority assignment and node adoption is introduced to produce a SPT with up to 38% fewer MIB nodes in the NSF topology and 46% fewer MIB nodes in the USA Longhaul topology, (2) critical articulation and deepest branch heuristics are used to process the MIB nodes, (3) a distance-based light-tree reconnection algorithm is proposed to create the multicast light-trees. Extensive simulations demonstrate the algorithm's efficiency in terms of link stress and end-to-end delay.
[ { "created": "Tue, 30 Nov 2010 21:34:53 GMT", "version": "v1" } ]
2010-12-02
[ [ "Zhou", "Fen", "", "IRISA" ], [ "Molnar", "Miklos", "", "IRISA" ], [ "Cousin", "Bernard", "", "IRISA" ] ]
In this article we study the multicast routing problem in all-optical WDM networks under the sparse light splitting constraint. To implement a multicast session, several light-trees may have to be used due to the limited fanouts of network nodes. Although many multicast routing algorithms have been proposed in order to reduce the total number of wavelength channels used (total cost) for a multicast session, the maximum number of wavelengths required in one fiber link (link stress) and the end-to-end delay are two parameters which are not always taken into consideration. It is known that the shortest path tree (SPT) results in the optimal end-to-end delay, but it cannot be employed directly for multicast routing in sparse light splitting WDM networks. Hence, we propose a novel wavelength routing algorithm which tries to avoid the multicast incapable branching nodes (MIBs, branching nodes without splitting capability) in the shortest-path-based multicast tree to diminish the link stress. Good parts of the shortest-path tree are retained by the algorithm to reduce the end-to-end delay. The algorithm consists of three steps: (1) a DijkstraPro algorithm with priority assignment and node adoption is introduced to produce a SPT with up to 38% fewer MIB nodes in the NSF topology and 46% fewer MIB nodes in the USA Longhaul topology, (2) critical articulation and deepest branch heuristics are used to process the MIB nodes, (3) a distance-based light-tree reconnection algorithm is proposed to create the multicast light-trees. Extensive simulations demonstrate the algorithm's efficiency in terms of link stress and end-to-end delay.
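Step (1) of the algorithm in the abstract above builds a shortest path tree with a modified Dijkstra (DijkstraPro). As a baseline for that step, a plain Dijkstra already yields the SPT as distance values plus parent pointers; the priority-assignment and node-adoption refinements that reduce MIB nodes are specific to the paper and are not reproduced in this minimal sketch:

```python
import heapq

def shortest_path_tree(graph, source):
    """Plain Dijkstra returning distances and parent pointers (the SPT).

    `graph` maps node -> {neighbor: weight}. The paper's DijkstraPro adds
    priority assignment and node adoption on top of this to produce an SPT
    with fewer multicast-incapable branching nodes; those refinements are
    not reproduced here.
    """
    dist = {source: 0.0}
    parent = {source: None}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry, a shorter path was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

# Tiny hypothetical topology, just to exercise the routine.
g = {
    "s": {"a": 1, "b": 4},
    "a": {"b": 2, "c": 5},
    "b": {"c": 1},
    "c": {},
}
dist, parent = shortest_path_tree(g, "s")
```

The `parent` map encodes the tree edges; the paper's remaining steps then rewire branches around MIB nodes rather than changing how this baseline tree is computed.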
1110.1360
Aravindan Vijayaraghavan
Aditya Bhaskara, Moses Charikar, Venkatesan Guruswami, Aravindan Vijayaraghavan, Yuan Zhou
Polynomial integrality gaps for strong SDP relaxations of Densest k-subgraph
26 pages, 1 figure. To appear in Symposium on Discrete Algorithms (SODA) 2012
null
null
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The densest k-subgraph (DkS) problem (i.e. find a size k subgraph with maximum number of edges), is one of the notorious problems in approximation algorithms. There is a significant gap between known upper and lower bounds for DkS: the current best algorithm gives an ~ O(n^{1/4}) approximation, while even showing a small constant factor hardness requires significantly stronger assumptions than P != NP. In addition to interest in designing better algorithms, a number of recent results have exploited the conjectured hardness of densest k-subgraph and its variants. Thus, understanding the approximability of DkS is an important challenge. In this work, we give evidence for the hardness of approximating DkS within polynomial factors. Specifically, we expose the limitations of strong semidefinite programs from SDP hierarchies in solving densest k-subgraph. Our results include: * A lower bound of Omega(n^{1/4}/log^3 n) on the integrality gap for Omega(log n/log log n) rounds of the Sherali-Adams relaxation for DkS. This also holds for the relaxation obtained from Sherali-Adams with an added SDP constraint. Our gap instances are in fact Erdos-Renyi random graphs. * For every epsilon > 0, a lower bound of n^{2/53-eps} on the integrality gap of n^{Omega(eps)} rounds of the Lasserre SDP relaxation for DkS, and an n^{Omega_eps(1)} gap for n^{1-eps} rounds. Our construction proceeds via a reduction from random instances of a certain Max-CSP over large domains. In the absence of inapproximability results for DkS, our results show that even the most powerful SDPs are unable to beat a factor of n^{Omega(1)}, and in fact even improving the best known n^{1/4} factor is a barrier for current techniques.
[ { "created": "Thu, 6 Oct 2011 19:29:01 GMT", "version": "v1" } ]
2011-10-07
[ [ "Bhaskara", "Aditya", "" ], [ "Charikar", "Moses", "" ], [ "Guruswami", "Venkatesan", "" ], [ "Vijayaraghavan", "Aravindan", "" ], [ "Zhou", "Yuan", "" ] ]
The densest k-subgraph (DkS) problem (i.e. find a size k subgraph with maximum number of edges), is one of the notorious problems in approximation algorithms. There is a significant gap between known upper and lower bounds for DkS: the current best algorithm gives an ~ O(n^{1/4}) approximation, while even showing a small constant factor hardness requires significantly stronger assumptions than P != NP. In addition to interest in designing better algorithms, a number of recent results have exploited the conjectured hardness of densest k-subgraph and its variants. Thus, understanding the approximability of DkS is an important challenge. In this work, we give evidence for the hardness of approximating DkS within polynomial factors. Specifically, we expose the limitations of strong semidefinite programs from SDP hierarchies in solving densest k-subgraph. Our results include: * A lower bound of Omega(n^{1/4}/log^3 n) on the integrality gap for Omega(log n/log log n) rounds of the Sherali-Adams relaxation for DkS. This also holds for the relaxation obtained from Sherali-Adams with an added SDP constraint. Our gap instances are in fact Erdos-Renyi random graphs. * For every epsilon > 0, a lower bound of n^{2/53-eps} on the integrality gap of n^{Omega(eps)} rounds of the Lasserre SDP relaxation for DkS, and an n^{Omega_eps(1)} gap for n^{1-eps} rounds. Our construction proceeds via a reduction from random instances of a certain Max-CSP over large domains. In the absence of inapproximability results for DkS, our results show that even the most powerful SDPs are unable to beat a factor of n^{Omega(1)}, and in fact even improving the best known n^{1/4} factor is a barrier for current techniques.
2108.09746
Godfred Amankwaa
Godfred Amankwaa, Richard Heeks and Alison L. Browne
Digitalising the Water Sector: Implications for Water Service Management and Governance
In proceedings of the 1st Virtual Conference on Implications of Information and Digital Technologies for Development, 2021
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Digital technologies are becoming central to water governance and management, yet their impact and developmental implications are under-researched, particularly in the global South. This paper addresses this knowledge gap by examining the process of water service digitalisation and the resulting effects on service providers. Drawing on qualitative methods, we apply ideas on digitalisation, value, and power to investigate the implementation and impact of digital technologies in Ghana's state water utility company. We find digital water innovations to be recent, and delivering relatively limited impacts as yet, with value mainly accruing at the utility's operational rather than strategic level. The digital technologies present avenues for power shifts and struggles internally and externally as well as some changes in water management structures and responsibilities. We end with a brief discussion on the implications for water service governance and research.
[ { "created": "Sun, 22 Aug 2021 15:05:11 GMT", "version": "v1" } ]
2021-08-24
[ [ "Amankwaa", "Godfred", "" ], [ "Heeks", "Richard", "" ], [ "Browne", "Alison L.", "" ] ]
Digital technologies are becoming central to water governance and management, yet their impact and developmental implications are under-researched, particularly in the global South. This paper addresses this knowledge gap by examining the process of water service digitalisation and the resulting effects on service providers. Drawing on qualitative methods, we apply ideas on digitalisation, value, and power to investigate the implementation and impact of digital technologies in Ghana's state water utility company. We find digital water innovations to be recent, and delivering relatively limited impacts as yet, with value mainly accruing at the utility's operational rather than strategic level. The digital technologies present avenues for power shifts and struggles internally and externally as well as some changes in water management structures and responsibilities. We end with a brief discussion on the implications for water service governance and research.
1812.11284
Zhongang Cai
Zhongang Cai, Cunjun Yu, Quang-Cuong Pham
3D Convolution on RGB-D Point Clouds for Accurate Model-free Object Pose Estimation
null
null
null
null
cs.RO cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The conventional pose estimation of a 3D object usually requires the knowledge of the 3D model of the object. Even with the recent development in convolutional neural networks (CNNs), a 3D model is often necessary in the final estimation. In this paper, we propose a two-stage pipeline that takes in raw colored point cloud data and estimates an object's translation and rotation by running 3D convolutions on voxels. The pipeline is simple yet highly accurate: translation error is reduced to the voxel resolution (around 1 cm) and rotation error is around 5 degrees. The pipeline is also put to actual robotic grasping tests where it achieves above 90% success rate for test objects. Another innovation is that a motion capture system is used to automatically label the point cloud samples which makes it possible to rapidly collect a large amount of highly accurate real data for training the neural networks.
[ { "created": "Sat, 29 Dec 2018 04:46:51 GMT", "version": "v1" } ]
2019-01-01
[ [ "Cai", "Zhongang", "" ], [ "Yu", "Cunjun", "" ], [ "Pham", "Quang-Cuong", "" ] ]
The conventional pose estimation of a 3D object usually requires the knowledge of the 3D model of the object. Even with the recent development in convolutional neural networks (CNNs), a 3D model is often necessary in the final estimation. In this paper, we propose a two-stage pipeline that takes in raw colored point cloud data and estimates an object's translation and rotation by running 3D convolutions on voxels. The pipeline is simple yet highly accurate: translation error is reduced to the voxel resolution (around 1 cm) and rotation error is around 5 degrees. The pipeline is also put to actual robotic grasping tests where it achieves above 90% success rate for test objects. Another innovation is that a motion capture system is used to automatically label the point cloud samples which makes it possible to rapidly collect a large amount of highly accurate real data for training the neural networks.
2102.12519
Diego S. D'antonio
Diego S. D'antonio, Gustavo A. Cardona, and David Salda\~na
The Catenary Robot: Design and Control of a Cable Propelled by Two Quadrotors
Supplementary video: https://youtu.be/RjKkjZuCDV4
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Transporting objects using aerial robots has been widely studied in the literature. Still, those approaches always assume that the connection between the quadrotor and the load is made in a previous stage. However, that previous stage usually requires human intervention, and autonomous procedures to locate and attach the object are not considered. Additionally, most of the approaches assume cables as rigid links, but manipulating cables requires considering the state when the cables are hanging. In this work, we design and control a catenary robot. Our robot is able to transport hook-shaped objects in the environment. The robotic system is composed of two quadrotors attached to the two ends of a cable. By defining the catenary curve with five degrees of freedom, position in 3-D, orientation in the z-axis, and span, we can drive the two quadrotors to track a given trajectory. We validate our approach with simulations and real robots. We present four different scenarios of experiments. Our numerical solution is computationally fast and can be executed in real-time.
[ { "created": "Wed, 24 Feb 2021 19:25:37 GMT", "version": "v1" }, { "created": "Tue, 2 Mar 2021 16:56:30 GMT", "version": "v2" } ]
2021-03-03
[ [ "D'antonio", "Diego S.", "" ], [ "Cardona", "Gustavo A.", "" ], [ "Saldaña", "David", "" ] ]
Transporting objects using aerial robots has been widely studied in the literature. Still, those approaches always assume that the connection between the quadrotor and the load is made in a previous stage. However, that previous stage usually requires human intervention, and autonomous procedures to locate and attach the object are not considered. Additionally, most of the approaches assume cables as rigid links, but manipulating cables requires considering the state when the cables are hanging. In this work, we design and control a catenary robot. Our robot is able to transport hook-shaped objects in the environment. The robotic system is composed of two quadrotors attached to the two ends of a cable. By defining the catenary curve with five degrees of freedom, position in 3-D, orientation in the z-axis, and span, we can drive the two quadrotors to track a given trajectory. We validate our approach with simulations and real robots. We present four different scenarios of experiments. Our numerical solution is computationally fast and can be executed in real-time.
2003.02973
Kazuhiro Seki
Kazuhiro Seki and Yusuke Ikuta
S-APIR: News-based Business Sentiment Index
null
null
null
null
cs.CL cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes our work on developing a new business sentiment index using daily newspaper articles. We adopt a recurrent neural network (RNN) with Gated Recurrent Units to predict the business sentiment of a given text. An RNN is initially trained on Economy Watchers Survey and then fine-tuned on news texts for domain adaptation. Also, a one-class support vector machine is applied to filter out texts deemed irrelevant to business sentiment. Moreover, we propose a simple approach to temporally analyzing how much and when any given factor influences the predicted business sentiment. The validity and utility of the proposed approaches are empirically demonstrated through a series of experiments on Nikkei Newspaper articles published from 2013 to 2018.
[ { "created": "Fri, 6 Mar 2020 00:18:50 GMT", "version": "v1" } ]
2020-03-09
[ [ "Seki", "Kazuhiro", "" ], [ "Ikuta", "Yusuke", "" ] ]
This paper describes our work on developing a new business sentiment index using daily newspaper articles. We adopt a recurrent neural network (RNN) with Gated Recurrent Units to predict the business sentiment of a given text. An RNN is initially trained on Economy Watchers Survey and then fine-tuned on news texts for domain adaptation. Also, a one-class support vector machine is applied to filter out texts deemed irrelevant to business sentiment. Moreover, we propose a simple approach to temporally analyzing how much and when any given factor influences the predicted business sentiment. The validity and utility of the proposed approaches are empirically demonstrated through a series of experiments on Nikkei Newspaper articles published from 2013 to 2018.
2106.11703
Raussen Martin
Martin Raussen
Connectivity of spaces of directed paths in geometric models for concurrent computation
null
Computational Geometry: Theory and Applications 109 (2023) 101942
10.1016/j.comgeo.2022.101942
null
cs.FL math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Higher Dimensional Automata (HDA) are higher dimensional relatives to transition systems in concurrency theory taking into account to which degree various actions commute. Mathematically, they take the form of labelled cubical complexes. It is important to know, and challenging from a geometric/topological perspective, whether the space of directed paths (executions in the model) between two vertices (states) is connected; more generally, to estimate higher connectedness of these path spaces. This paper presents an approach for such an estimation for particularly simple HDA modelling the access of a number of processors to a number of resources with given limited capacity each. It defines a spare capacity for a concurrent program with prescribed periods of access of the processors to the resources. It shows that the connectedness of spaces of directed paths can be estimated (from above) by spare capacities. Moreover, spare capacities can also be used to detect deadlocks and critical states in such a HDA. The key theoretical ingredient is a transition from the calculation of local connectedness bounds (of the upper links of vertices of an HDA) to global ones by applying a version of the nerve lemma due to Anders Bj\"orner.
[ { "created": "Tue, 22 Jun 2021 12:18:49 GMT", "version": "v1" }, { "created": "Tue, 5 Apr 2022 09:09:24 GMT", "version": "v2" }, { "created": "Tue, 6 Sep 2022 13:30:27 GMT", "version": "v3" } ]
2022-09-07
[ [ "Raussen", "Martin", "" ] ]
Higher Dimensional Automata (HDA) are higher dimensional relatives to transition systems in concurrency theory taking into account to which degree various actions commute. Mathematically, they take the form of labelled cubical complexes. It is important to know, and challenging from a geometric/topological perspective, whether the space of directed paths (executions in the model) between two vertices (states) is connected; more generally, to estimate higher connectedness of these path spaces. This paper presents an approach for such an estimation for particularly simple HDA modelling the access of a number of processors to a number of resources with given limited capacity each. It defines a spare capacity for a concurrent program with prescribed periods of access of the processors to the resources. It shows that the connectedness of spaces of directed paths can be estimated (from above) by spare capacities. Moreover, spare capacities can also be used to detect deadlocks and critical states in such a HDA. The key theoretical ingredient is a transition from the calculation of local connectedness bounds (of the upper links of vertices of an HDA) to global ones by applying a version of the nerve lemma due to Anders Bj\"orner.
2310.14230
Haoran Wang
Haoran Wang, Qiuye Jin, Shiman Li, Siyu Liu, Manning Wang, Zhijian Song
A comprehensive survey on deep active learning in medical image analysis
More papers and contents of medical image analysis & performance analysis on medical imaging datasets with experiments
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep learning has achieved widespread success in medical image analysis, leading to an increasing demand for large-scale expert-annotated medical image datasets. Yet, the high cost of annotating medical images severely hampers the development of deep learning in this field. To reduce annotation costs, active learning aims to select the most informative samples for annotation and train high-performance models with as few labeled samples as possible. In this survey, we review the core methods of active learning, including the evaluation of informativeness and sampling strategy. For the first time, we provide a detailed summary of the integration of active learning with other label-efficient techniques, such as semi-supervised, self-supervised learning, and so on. We also summarize active learning works that are specifically tailored to medical image analysis. Additionally, we conduct a thorough comparative analysis of the performance of different AL methods in medical image analysis with experiments. In the end, we offer our perspectives on the future trends and challenges of active learning and its applications in medical image analysis.
[ { "created": "Sun, 22 Oct 2023 08:46:40 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2023 01:36:19 GMT", "version": "v2" }, { "created": "Wed, 13 Mar 2024 09:23:10 GMT", "version": "v3" } ]
2024-03-14
[ [ "Wang", "Haoran", "" ], [ "Jin", "Qiuye", "" ], [ "Li", "Shiman", "" ], [ "Liu", "Siyu", "" ], [ "Wang", "Manning", "" ], [ "Song", "Zhijian", "" ] ]
Deep learning has achieved widespread success in medical image analysis, leading to an increasing demand for large-scale expert-annotated medical image datasets. Yet, the high cost of annotating medical images severely hampers the development of deep learning in this field. To reduce annotation costs, active learning aims to select the most informative samples for annotation and train high-performance models with as few labeled samples as possible. In this survey, we review the core methods of active learning, including the evaluation of informativeness and sampling strategy. For the first time, we provide a detailed summary of the integration of active learning with other label-efficient techniques, such as semi-supervised, self-supervised learning, and so on. We also summarize active learning works that are specifically tailored to medical image analysis. Additionally, we conduct a thorough comparative analysis of the performance of different AL methods in medical image analysis with experiments. In the end, we offer our perspectives on the future trends and challenges of active learning and its applications in medical image analysis.
1004.4614
Ashley Smith
Vitthal J. Gond and Aditya Goel
Performance Evaluation of Wavelength Routed Optical Network with Wavelength Conversion
Vitthal J. Gond and Aditya Goel, "Performance Evaluation of Wavelength Routed Optical Network with Wavelength Conversion", Journal of Telecommunications, Volume 2, Issue 1, p110-114, April 2010
Journal of Telecommunications, Volume 2, Issue 1, p110-114, April 2010
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid development of telecommunication networks is driven by user demands for new applications and advances in technologies. The explosive growth of the internet traffic is due to its use for collecting the information, communication, multimedia application, entertainment, etc. These applications are imposing a tremendous demand for bandwidth capacity on telecommunication network. The introduction of fiber optics had proved to meet the huge demand of bandwidth. These requirement can be meet by all optical network which is capable of transmitting enormous data at very high speed, around 50 Tera bits per seconds (Tbps) A wavelength conversion technique is addressed in this paper to reduced the blocking probability in wavelength routed networks. It is seen that the blocking probability of traffic requests decreases as the wavelength conversion factor increases. We explode the possibility for network with different size with variation in wavelength per link. In this work the evaluation of wavelength routed optical network with varying number of wavelength converters, different traffic types are carried out and results are shown that the blocking probability is minimum with 50% to 60% wavelength convertible nodes. Wavelength convertible nodes more than 60% are not showing much effect on reduction in blocking probability rather it results in increase in overall cost of network.
[ { "created": "Mon, 26 Apr 2010 19:29:59 GMT", "version": "v1" } ]
2010-04-27
[ [ "Gond", "Vitthal J.", "" ], [ "Goel", "Aditya", "" ] ]
The rapid development of telecommunication networks is driven by user demands for new applications and advances in technologies. The explosive growth of internet traffic is due to its use for collecting information, communication, multimedia applications, entertainment, etc. These applications impose a tremendous demand for bandwidth capacity on telecommunication networks. The introduction of fiber optics has proved able to meet the huge demand for bandwidth. This requirement can be met by all-optical networks, which are capable of transmitting enormous amounts of data at very high speeds, around 50 terabits per second (Tbps). A wavelength conversion technique is addressed in this paper to reduce the blocking probability in wavelength-routed networks. It is seen that the blocking probability of traffic requests decreases as the wavelength conversion factor increases. We explore the possibilities for networks of different sizes with variation in wavelengths per link. In this work, wavelength-routed optical networks with varying numbers of wavelength converters and different traffic types are evaluated, and the results show that the blocking probability is minimal with 50% to 60% wavelength-convertible nodes. Increasing the fraction of wavelength-convertible nodes beyond 60% has little further effect on the blocking probability and instead increases the overall cost of the network.
1906.05986
Omar Alonso
Omar Alonso, Vasileios Kandylas, Serge-Eric Tremblay
Scalable Knowledge Graph Construction from Twitter
null
null
null
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a knowledge graph derived from Twitter data with the goal of discovering relationships between people, links, and topics. The goal is to filter out noise from Twitter and surface an inside-out view that relies on high quality content. The generated graph contains many relationships where the user can query and traverse the structure from different angles allowing the development of new applications.
[ { "created": "Fri, 14 Jun 2019 02:28:55 GMT", "version": "v1" } ]
2019-06-17
[ [ "Alonso", "Omar", "" ], [ "Kandylas", "Vasileios", "" ], [ "Tremblay", "Serge-Eric", "" ] ]
We describe a knowledge graph derived from Twitter data with the goal of discovering relationships between people, links, and topics. The goal is to filter out noise from Twitter and surface an inside-out view that relies on high quality content. The generated graph contains many relationships where the user can query and traverse the structure from different angles allowing the development of new applications.
1507.07348
Andreas Schwarz
Sam Nees, Andreas Schwarz, Walter Kellermann
A model for the temporal evolution of the spatial coherence in decaying reverberant sound fields
Accepted for JASA Express Letters
null
10.1121/1.4929733
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reverberant sound fields are often modeled as isotropic. However, it has been observed that spatial properties change during the decay of the sound field energy, due to non-isotropic attenuation in non-ideal rooms. In this letter, a model for the spatial coherence between two sensors in a decaying reverberant sound field is developed for rectangular rooms. The modeled coherence function depends on room dimensions, surface reflectivity and orientation of the sensor pair, but is independent of the position of source and sensors in the room. The model includes the spherically isotropic (diffuse) and cylindrically isotropic sound field models as special cases.
[ { "created": "Mon, 27 Jul 2015 10:08:24 GMT", "version": "v1" } ]
2015-08-26
[ [ "Nees", "Sam", "" ], [ "Schwarz", "Andreas", "" ], [ "Kellermann", "Walter", "" ] ]
Reverberant sound fields are often modeled as isotropic. However, it has been observed that spatial properties change during the decay of the sound field energy, due to non-isotropic attenuation in non-ideal rooms. In this letter, a model for the spatial coherence between two sensors in a decaying reverberant sound field is developed for rectangular rooms. The modeled coherence function depends on room dimensions, surface reflectivity and orientation of the sensor pair, but is independent of the position of source and sensors in the room. The model includes the spherically isotropic (diffuse) and cylindrically isotropic sound field models as special cases.
2106.04565
Juri Opitz
Sarah Uhrig, Yoalli Rezepka Garcia, Juri Opitz, Anette Frank
Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing
IWPT 2021
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures: given a sentence in any language, we aim to capture its core semantic content through concepts connected by manifold types of semantic relations. Methods typically leverage large silver training data to learn a single model that is able to project non-English sentences to AMRs. However, we find that a simple baseline tends to be over-looked: translating the sentences to English and projecting their AMR with a monolingual AMR parser (translate+parse,T+P). In this paper, we revisit this simple two-step base-line, and enhance it with a strong NMT system and a strong AMR parser. Our experiments show that T+P outperforms a recent state-of-the-art system across all tested languages: German, Italian, Spanish and Mandarin with +14.6, +12.6, +14.3 and +16.0 Smatch points.
[ { "created": "Tue, 8 Jun 2021 17:52:48 GMT", "version": "v1" } ]
2021-06-09
[ [ "Uhrig", "Sarah", "" ], [ "Garcia", "Yoalli Rezepka", "" ], [ "Opitz", "Juri", "" ], [ "Frank", "Anette", "" ] ]
In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures: given a sentence in any language, we aim to capture its core semantic content through concepts connected by manifold types of semantic relations. Methods typically leverage large silver training data to learn a single model that is able to project non-English sentences to AMRs. However, we find that a simple baseline tends to be overlooked: translating the sentences to English and projecting their AMR with a monolingual AMR parser (translate+parse, T+P). In this paper, we revisit this simple two-step baseline, and enhance it with a strong NMT system and a strong AMR parser. Our experiments show that T+P outperforms a recent state-of-the-art system across all tested languages: German, Italian, Spanish and Mandarin with +14.6, +12.6, +14.3 and +16.0 Smatch points.
2211.14823
Bowen Cai
Bowen Cai, Yujie Li, Yuqin Liang, Rongfei Jia, Binqiang Zhao, Mingming Gong, and Huan Fu
3D Scene Creation and Rendering via Rough Meshes: A Lighting Transfer Avenue
Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), project page: http://3d-front-future.github.io/LighTNet
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering. Due to the technical difficulty, one can only obtain rough 3D models (R3DMs) for most real objects using existing 3D reconstruction techniques. As a result, physically-based rendering (PBR) would render low-quality images or videos for scenes that are constructed by R3DMs. One promising solution would be representing real-world objects as Neural Fields such as NeRFs, which are able to generate photo-realistic renderings of an object under desired viewpoints. However, a drawback is that the synthesized views through Neural Fields Rendering (NFR) cannot reflect the simulated lighting details on R3DMs in PBR pipelines, especially when object interactions in the 3D scene creation cause local shadows. To solve this dilemma, we propose a lighting transfer network (LighTNet) to bridge NFR and PBR, such that they can benefit from each other. LighTNet reasons about a simplified image composition model, remedies the uneven surface issue caused by R3DMs, and is empowered by several perceptual-motivated constraints and a new Lab angle loss which enhances the contrast between lighting strength and colors. Comparisons demonstrate that LighTNet is superior in synthesizing impressive lighting, and is promising in pushing NFR further in practical 3D modeling workflows.
[ { "created": "Sun, 27 Nov 2022 13:31:00 GMT", "version": "v1" }, { "created": "Sun, 4 Dec 2022 07:13:01 GMT", "version": "v2" }, { "created": "Tue, 19 Mar 2024 15:02:04 GMT", "version": "v3" } ]
2024-03-20
[ [ "Cai", "Bowen", "" ], [ "Li", "Yujie", "" ], [ "Liang", "Yuqin", "" ], [ "Jia", "Rongfei", "" ], [ "Zhao", "Binqiang", "" ], [ "Gong", "Mingming", "" ], [ "Fu", "Huan", "" ] ]
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering. Due to the technical difficulty, one can only obtain rough 3D models (R3DMs) for most real objects using existing 3D reconstruction techniques. As a result, physically-based rendering (PBR) would render low-quality images or videos for scenes that are constructed by R3DMs. One promising solution would be representing real-world objects as Neural Fields such as NeRFs, which are able to generate photo-realistic renderings of an object under desired viewpoints. However, a drawback is that the synthesized views through Neural Fields Rendering (NFR) cannot reflect the simulated lighting details on R3DMs in PBR pipelines, especially when object interactions in the 3D scene creation cause local shadows. To solve this dilemma, we propose a lighting transfer network (LighTNet) to bridge NFR and PBR, such that they can benefit from each other. LighTNet reasons about a simplified image composition model, remedies the uneven surface issue caused by R3DMs, and is empowered by several perceptual-motivated constraints and a new Lab angle loss which enhances the contrast between lighting strength and colors. Comparisons demonstrate that LighTNet is superior in synthesizing impressive lighting, and is promising in pushing NFR further in practical 3D modeling workflows.
1105.4301
Neil J. Gunther
Neil J. Gunther, Shanti Subramanyam, Stefan Parvu
A Methodology for Optimizing Multithreaded System Scalability on Multi-cores
21 pages, 11 figures. To appear in "Programming Multi-core and Many-core Computing Systems," eds. S. Pllana and F. Xhafa, Wiley Series on Parallel and Distributed Computing
null
null
null
cs.DC cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show how to quantify scalability with the Universal Scalability Law (USL) by applying it to performance measurements of memcached, J2EE, and Weblogic on multi-core platforms. Since commercial multicores are essentially black-boxes, the accessible performance gains are primarily available at the application level. We also demonstrate how our methodology can identify the most significant performance tuning opportunities to optimize application scalability, as well as providing an easy means for exploring other aspects of the multi-core system design space.
[ { "created": "Sun, 22 May 2011 01:12:11 GMT", "version": "v1" } ]
2011-05-24
[ [ "Gunther", "Neil J.", "" ], [ "Subramanyam", "Shanti", "" ], [ "Parvu", "Stefan", "" ] ]
We show how to quantify scalability with the Universal Scalability Law (USL) by applying it to performance measurements of memcached, J2EE, and Weblogic on multi-core platforms. Since commercial multicores are essentially black-boxes, the accessible performance gains are primarily available at the application level. We also demonstrate how our methodology can identify the most significant performance tuning opportunities to optimize application scalability, as well as providing an easy means for exploring other aspects of the multi-core system design space.
1705.00571
Andrew Moore
Andrew Moore, Paul Rayson
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
5 pages, to Appear in the Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval 2017), August 2017, Vancouver, BC
null
10.18653/v1/S17-2095
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Term Memory (BLSTM). We found an improvement of 4-6% using the LSTM model over the SVR and came fourth in the track. We report a number of different evaluations using a finance specific word embedding model and reflect on the effects of using different evaluation metrics.
[ { "created": "Mon, 1 May 2017 15:57:41 GMT", "version": "v1" } ]
2018-06-15
[ [ "Moore", "Andrew", "" ], [ "Rayson", "Paul", "" ] ]
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Term Memory (BLSTM). We found an improvement of 4-6% using the LSTM model over the SVR and came fourth in the track. We report a number of different evaluations using a finance specific word embedding model and reflect on the effects of using different evaluation metrics.
1209.1198
Chong-Dao Lee
Yaotsu Chang, Chong-Dao Lee, Keqin Feng
Multivariate Interpolation Formula over Finite Fields and Its Applications in Coding Theory
11 pages. This work is supported by the grant of the NSFC no.10990011 and the Tsinghua National Lab. of Information Science and Technology
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A multivariate interpolation formula (MVIF) over finite fields is presented by using the proposed Kronecker delta function. The MVIF can be applied to yield polynomial relations over the base field among homogeneous symmetric rational functions. Besides the property that all the coefficients are coming from the base field, there is also a significant one on the degrees of the obtained polynomial; namely, the degree of each term satisfies certain condition. Next, for any cyclic codes the unknown syndrome representation can also be provided by the proposed MVIF and also has the same properties. By applying the unknown syndrome representation and the Berlekamp-Massey algorithm, one-step decoding algorithms can be developed to determine the error locator polynomials for arbitrary cyclic codes.
[ { "created": "Thu, 6 Sep 2012 07:08:08 GMT", "version": "v1" }, { "created": "Thu, 20 Dec 2012 15:05:55 GMT", "version": "v2" } ]
2012-12-21
[ [ "Chang", "Yaotsu", "" ], [ "Lee", "Chong-Dao", "" ], [ "Feng", "Keqin", "" ] ]
A multivariate interpolation formula (MVIF) over finite fields is presented by using the proposed Kronecker delta function. The MVIF can be applied to yield polynomial relations over the base field among homogeneous symmetric rational functions. Besides the property that all the coefficients come from the base field, there is also a significant property concerning the degrees of the obtained polynomial; namely, the degree of each term satisfies a certain condition. Next, for any cyclic code the unknown syndrome representation can also be provided by the proposed MVIF and has the same properties. By applying the unknown syndrome representation and the Berlekamp-Massey algorithm, one-step decoding algorithms can be developed to determine the error locator polynomials for arbitrary cyclic codes.
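The one-step decoding mentioned in the abstract above hinges on the Berlekamp-Massey algorithm, which finds the shortest LFSR (and hence the error-locator polynomial) generating a given syndrome sequence. A minimal sketch over GF(2) follows; the paper works over general finite fields, and the function name is ours:

```python
def berlekamp_massey_gf2(s):
    """Return the linear complexity L of a binary sequence s,
    i.e. the length of the shortest LFSR that generates it."""
    n = len(s)
    c = [0] * n; b = [0] * n     # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                 # current LFSR length, position of last length change
    for i in range(n):
        # Discrepancy between s[i] and the LFSR's prediction of it.
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                    # prediction failed: correct the polynomial
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:       # the LFSR must grow
                L, m, b = i + 1 - L, i, t
    return L
```

For example, the alternating sequence 0,1,0,1,... has linear complexity 2 (minimal polynomial x^2 + 1 over GF(2)), while a constant nonzero sequence has complexity 1.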
2407.10332
Ryan Hare
Ryan Hare and Ying Tang
Ontology-driven Reinforcement Learning for Personalized Student Support
6 pages, 3 figures, in press for IEEE Systems, Man, and Cybernetics 2024 Conference
null
null
null
cs.CY cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the search for more effective education, there is a widespread effort to develop better approaches to personalize student education. Unassisted, educators often do not have time or resources to personally support every student in a given classroom. Motivated by this issue, and by recent advancements in artificial intelligence, this paper presents a general-purpose framework for personalized student support, applicable to any virtual educational system such as a serious game or an intelligent tutoring system. To fit any educational situation, we apply ontologies for their semantic organization, combining them with data collection considerations and multi-agent reinforcement learning. The result is a modular system that can be adapted to any virtual educational software to provide useful personalized assistance to students.
[ { "created": "Sun, 14 Jul 2024 21:11:44 GMT", "version": "v1" } ]
2024-07-16
[ [ "Hare", "Ryan", "" ], [ "Tang", "Ying", "" ] ]
In the search for more effective education, there is a widespread effort to develop better approaches to personalize student education. Unassisted, educators often do not have time or resources to personally support every student in a given classroom. Motivated by this issue, and by recent advancements in artificial intelligence, this paper presents a general-purpose framework for personalized student support, applicable to any virtual educational system such as a serious game or an intelligent tutoring system. To fit any educational situation, we apply ontologies for their semantic organization, combining them with data collection considerations and multi-agent reinforcement learning. The result is a modular system that can be adapted to any virtual educational software to provide useful personalized assistance to students.
2107.02777
W. Damayantha Kularatne
W. D. Kularatne, Lasanthika H. Dissawa, T.M.S.S.K. Ekanayake, Janaka B. Ekanayake
Developing and delivering a remote experiment based on the experiential learning framework during COVID-19 pandemic
15 pages
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Students following engineering disciplines should acquire not only a conceptual understanding of the subject matter but also the associated processes and attitudes. There are two recognizable learning environments for students, namely the classroom environment and the laboratory environment. With the COVID-19 pandemic, both environments merged into online environments, impacting students' development of processes and characteristic attitudes. This paper introduces a theoretical framework based on experiential learning to plan and deliver processes through an online environment. A case study based on the power factor correction experiment is presented. The traditional experiment that runs for 3 hours was broken into smaller tasks, such as a pre-lab activity, a simulation exercise, a PowerPoint presentation, a remote laboratory activity, and a final report, following the experiential learning approach. A questionnaire carrying closed- and open-ended questions was administered to obtain students' reflections on developing the processes through an online-friendly experiential learning approach. The majority of the students liked the approach and praised it for providing them with an opportunity to perform the experiment in a novel way during the COVID-19 situation.
[ { "created": "Tue, 6 Jul 2021 17:39:48 GMT", "version": "v1" } ]
2021-07-07
[ [ "Kularatne", "W. D.", "" ], [ "Dissawa", "Lasanthika H.", "" ], [ "Ekanayake", "T. M. S. S. K.", "" ], [ "Ekanayake", "Janaka B.", "" ] ]
Students following engineering disciplines should acquire not only a conceptual understanding of the subject matter but also the associated processes and attitudes. There are two recognizable learning environments for students, namely the classroom environment and the laboratory environment. With the COVID-19 pandemic, both environments merged into online environments, impacting students' development of processes and characteristic attitudes. This paper introduces a theoretical framework based on experiential learning to plan and deliver processes through an online environment. A case study based on the power factor correction experiment is presented. The traditional experiment that runs for 3 hours was broken into smaller tasks, such as a pre-lab activity, a simulation exercise, a PowerPoint presentation, a remote laboratory activity, and a final report, following the experiential learning approach. A questionnaire carrying closed- and open-ended questions was administered to obtain students' reflections on developing the processes through an online-friendly experiential learning approach. The majority of the students liked the approach and praised it for providing them with an opportunity to perform the experiment in a novel way during the COVID-19 situation.
2110.02861
Tim Dettmers
Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer
8-bit Optimizers via Block-wise Quantization
ICLR2022 spotlight version
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Stateful optimizers maintain gradient statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values. This state can be used to accelerate optimization compared to plain stochastic gradient descent but uses memory that might otherwise be allocated to model parameters, thereby limiting the maximum size of models trained in practice. In this paper, we develop the first optimizers that use 8-bit statistics while maintaining the performance levels of using 32-bit optimizer states. To overcome the resulting computational, quantization, and stability challenges, we develop block-wise dynamic quantization. Block-wise quantization divides input tensors into smaller blocks that are independently quantized. Each block is processed in parallel across cores, yielding faster optimization and high precision quantization. To maintain stability and performance, we combine block-wise quantization with two additional changes: (1) dynamic quantization, a form of non-linear optimization that is precise for both large and small magnitude values, and (2) a stable embedding layer to reduce gradient variance that comes from the highly non-uniform distribution of input tokens in language models. As a result, our 8-bit optimizers maintain 32-bit performance with a small fraction of the memory footprint on a range of tasks, including 1.5B parameter language modeling, GLUE finetuning, ImageNet classification, WMT'14 machine translation, MoCo v2 contrastive ImageNet pretraining+finetuning, and RoBERTa pretraining, without changes to the original optimizer hyperparameters. We open-source our 8-bit optimizers as a drop-in replacement that only requires a two-line code change.
[ { "created": "Wed, 6 Oct 2021 15:43:20 GMT", "version": "v1" }, { "created": "Mon, 20 Jun 2022 16:05:15 GMT", "version": "v2" } ]
2022-06-22
[ [ "Dettmers", "Tim", "" ], [ "Lewis", "Mike", "" ], [ "Shleifer", "Sam", "" ], [ "Zettlemoyer", "Luke", "" ] ]
Stateful optimizers maintain gradient statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values. This state can be used to accelerate optimization compared to plain stochastic gradient descent but uses memory that might otherwise be allocated to model parameters, thereby limiting the maximum size of models trained in practice. In this paper, we develop the first optimizers that use 8-bit statistics while maintaining the performance levels of using 32-bit optimizer states. To overcome the resulting computational, quantization, and stability challenges, we develop block-wise dynamic quantization. Block-wise quantization divides input tensors into smaller blocks that are independently quantized. Each block is processed in parallel across cores, yielding faster optimization and high precision quantization. To maintain stability and performance, we combine block-wise quantization with two additional changes: (1) dynamic quantization, a form of non-linear optimization that is precise for both large and small magnitude values, and (2) a stable embedding layer to reduce gradient variance that comes from the highly non-uniform distribution of input tokens in language models. As a result, our 8-bit optimizers maintain 32-bit performance with a small fraction of the memory footprint on a range of tasks, including 1.5B parameter language modeling, GLUE finetuning, ImageNet classification, WMT'14 machine translation, MoCo v2 contrastive ImageNet pretraining+finetuning, and RoBERTa pretraining, without changes to the original optimizer hyperparameters. We open-source our 8-bit optimizers as a drop-in replacement that only requires a two-line code change.
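The block-wise idea described above can be sketched in a few lines: quantize each fixed-size block with its own absmax scale, so a single outlier only degrades precision within its own block. This toy version uses plain linear absmax quantization rather than the paper's non-linear dynamic quantization, and the function names are illustrative:

```python
def quantize_blockwise(x, block_size=4):
    """Absmax-quantize a flat list of floats to int8 codes, one scale per block.
    Returns (int8 values, per-block scales)."""
    q, scales = [], []
    for start in range(0, len(x), block_size):
        blk = x[start:start + block_size]
        absmax = max(abs(v) for v in blk) or 1.0   # guard against an all-zero block
        scale = absmax / 127.0
        scales.append(scale)
        q.extend(round(v / scale) for v in blk)
    return q, scales

def dequantize_blockwise(q, scales, block_size=4):
    """Recover approximate floats from int8 codes and per-block scales."""
    return [v * scales[i // block_size] for i, v in enumerate(q)]

x = [0.01, -0.02, 0.03, 0.005,   # small-magnitude block
     10.0, -8.0, 2.0, 1.0]       # block containing large values
q, scales = quantize_blockwise(x)
x_hat = dequantize_blockwise(q, scales)
```

With a single tensor-wide scale of 10/127, every value in the first block would round to zero; the per-block scales keep them representable, which is the precision benefit the abstract describes.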
1909.08048
Rasheed Hussain
Fatima Hussain and Rasheed Hussain and Brett Noye and Salah Sharieh
Enterprise API Security and GDPR Compliance: Design and Implementation Perspective
7 pages
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With advancements in enterprise-level business development, the demand for new applications and services is overwhelming. For the development and delivery of such applications and services, enterprise businesses rely on Application Programming Interfaces (APIs). In essence, an API is a double-edged sword. On one hand, it provides an easy way to expand the business through sharing value and utility, but on the other hand it raises security and privacy issues. Since applications usually use APIs to retrieve important data, it is extremely important to make sure that effective access control and security mechanisms are in place and that the data does not fall into the wrong hands. In this article, we discuss the current state of enterprise API security and the role of Machine Learning (ML) in API security. We also discuss General Data Protection Regulation (GDPR) compliance and its effect on API security.
[ { "created": "Tue, 17 Sep 2019 19:36:12 GMT", "version": "v1" } ]
2019-09-19
[ [ "Hussain", "Fatima", "" ], [ "Hussain", "Rasheed", "" ], [ "Noye", "Brett", "" ], [ "Sharieh", "Salah", "" ] ]
With advancements in enterprise-level business development, the demand for new applications and services is overwhelming. For the development and delivery of such applications and services, enterprise businesses rely on Application Programming Interfaces (APIs). In essence, an API is a double-edged sword. On one hand, it provides an easy way to expand the business through sharing value and utility, but on the other hand it raises security and privacy issues. Since applications usually use APIs to retrieve important data, it is extremely important to make sure that effective access control and security mechanisms are in place and that the data does not fall into the wrong hands. In this article, we discuss the current state of enterprise API security and the role of Machine Learning (ML) in API security. We also discuss General Data Protection Regulation (GDPR) compliance and its effect on API security.