field           type            min - max
id              stringlengths   9 - 10
submitter       stringlengths   1 - 64
authors         stringlengths   4 - 20.7k
title           stringlengths   4 - 246
comments        stringlengths   1 - 523
journal-ref     stringlengths   4 - 404
doi             stringlengths   11 - 153
report-no       stringlengths   2 - 254
categories      stringlengths   5 - 98
license         stringclasses   9 values
orig_abstract   stringlengths   14 - 3.35k
versions        listlengths     1 - 60
update_date     stringlengths   10 - 10
authors_parsed  listlengths     1 - 1.35k
abstract        stringlengths   11 - 3.34k
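The `versions` and `authors_parsed` columns in the schema above hold JSON-serialized lists. A minimal sketch of decoding one `versions` value with Python's standard library (the literal below is copied from the first record; nothing beyond the stdlib is assumed):

```python
import json
from datetime import datetime

# A `versions` value exactly as it appears in the first record.
raw = ('[ { "created": "Sat, 27 Jun 2020 10:43:11 GMT", "version": "v1" }, '
       '{ "created": "Tue, 15 Jun 2021 21:58:41 GMT", "version": "v2" } ]')

versions = json.loads(raw)
tags = [v["version"] for v in versions]

# `created` is an RFC 1123-style timestamp; strptime's %Z accepts "GMT".
created = [datetime.strptime(v["created"], "%a, %d %b %Y %H:%M:%S %Z")
           for v in versions]

print(tags)             # ['v1', 'v2']
print(created[0].year)  # 2020
```

The same `json.loads` call applies to `authors_parsed`, whose entries are `[last, first, suffix]` triples.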
2006.15337
Khaled Elbassioni
Khaled Elbassioni
On Dualization over Distributive Lattices
null
Discrete Mathematics & Theoretical Computer Science, vol. 24, no 2, Discrete Algorithms (October 27, 2022) dmtcs:6742
10.46298/dmtcs.6742
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a partially order set (poset) $P$, and a pair of families of ideals $\mathcal{I}$ and filters $\mathcal{F}$ in $P$ such that each pair $(I,F)\in \mathcal{I}\times\mathcal{F}$ has a non-empty intersection, the dualization problem over $P$ is to check whether there is an ideal $X$ in $P$ which intersects every member of $\mathcal{F}$ and does not contain any member of $\mathcal{I}$. Equivalently, the problem is to check for a distributive lattice $L=L(P)$, given by the poset $P$ of its set of joint-irreducibles, and two given antichains $\mathcal{A},\mathcal{B}\subseteq L$ such that no $a\in\mathcal{A}$ is dominated by any $b\in\mathcal{B}$, whether $\mathcal{A}$ and $\mathcal{B}$ cover (by domination) the entire lattice. We show that the problem can be solved in quasi-polynomial time in the sizes of $P$, $\mathcal{A}$ and $\mathcal{B}$, thus answering an open question in Babin and Kuznetsov (2017). As an application, we show that minimal infrequent closed sets of attributes in a rational database, with respect to a given implication base of maximum premise size of one, can be enumerated in incremental quasi-polynomial time.
[ { "created": "Sat, 27 Jun 2020 10:43:11 GMT", "version": "v1" }, { "created": "Tue, 15 Jun 2021 21:58:41 GMT", "version": "v2" }, { "created": "Tue, 19 Jul 2022 17:11:41 GMT", "version": "v3" }, { "created": "Fri, 21 Oct 2022 13:11:03 GMT", "version": "v4" } ]
2023-06-22
[ [ "Elbassioni", "Khaled", "" ] ]
Given a partially ordered set (poset) $P$, and a pair of families of ideals $\mathcal{I}$ and filters $\mathcal{F}$ in $P$ such that each pair $(I,F)\in \mathcal{I}\times\mathcal{F}$ has a non-empty intersection, the dualization problem over $P$ is to check whether there is an ideal $X$ in $P$ which intersects every member of $\mathcal{F}$ and does not contain any member of $\mathcal{I}$. Equivalently, the problem is to check, for a distributive lattice $L=L(P)$, given by the poset $P$ of its set of join-irreducibles, and two given antichains $\mathcal{A},\mathcal{B}\subseteq L$ such that no $a\in\mathcal{A}$ is dominated by any $b\in\mathcal{B}$, whether $\mathcal{A}$ and $\mathcal{B}$ cover (by domination) the entire lattice. We show that the problem can be solved in quasi-polynomial time in the sizes of $P$, $\mathcal{A}$ and $\mathcal{B}$, thus answering an open question in Babin and Kuznetsov (2017). As an application, we show that minimal infrequent closed sets of attributes in a relational database, with respect to a given implication base of maximum premise size one, can be enumerated in incremental quasi-polynomial time.
2202.11572
Rohan Chandra
Nilesh Suriyarachchi, Rohan Chandra, John S. Baras, Dinesh Manocha
GAMEOPT: Optimal Real-time Multi-Agent Planning and Control for Dynamic Intersections
Submitted to ITSC 2022
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
We propose GameOpt: a novel hybrid approach to cooperative intersection control for dynamic, multi-lane, unsignalized intersections. Safely navigating these complex and accident prone intersections requires simultaneous trajectory planning and negotiation among drivers. GameOpt is a hybrid formulation that first uses an auction mechanism to generate a priority entrance sequence for every agent, followed by an optimization-based trajectory planner that computes velocity controls that satisfy the priority sequence. This coupling operates at real-time speeds of less than 10 milliseconds in high density traffic of more than 10,000 vehicles/hr, 100 times faster than other fully optimization-based methods, while providing guarantees in terms of fairness, safety, and efficiency. Tested on the SUMO simulator, our algorithm improves throughput by at least 25%, time taken to reach the goal by 75%, and fuel consumption by 33% compared to auction-based approaches and signaled approaches using traffic-lights and stop signs.
[ { "created": "Wed, 23 Feb 2022 15:42:55 GMT", "version": "v1" }, { "created": "Fri, 25 Feb 2022 05:35:19 GMT", "version": "v2" }, { "created": "Fri, 18 Mar 2022 04:19:42 GMT", "version": "v3" } ]
2022-03-21
[ [ "Suriyarachchi", "Nilesh", "" ], [ "Chandra", "Rohan", "" ], [ "Baras", "John S.", "" ], [ "Manocha", "Dinesh", "" ] ]
We propose GameOpt: a novel hybrid approach to cooperative intersection control for dynamic, multi-lane, unsignalized intersections. Safely navigating these complex and accident-prone intersections requires simultaneous trajectory planning and negotiation among drivers. GameOpt is a hybrid formulation that first uses an auction mechanism to generate a priority entrance sequence for every agent, followed by an optimization-based trajectory planner that computes velocity controls satisfying the priority sequence. This coupling operates in real time, at under 10 milliseconds in high-density traffic of more than 10,000 vehicles/hr, 100 times faster than fully optimization-based methods, while providing guarantees in terms of fairness, safety, and efficiency. Tested on the SUMO simulator, our algorithm improves throughput by at least 25%, reduces the time taken to reach the goal by 75%, and cuts fuel consumption by 33% compared to auction-based approaches and signaled approaches using traffic lights and stop signs.
1608.01818
Manuel Mazzara
Leonard Johard, Lukas Breitwieser, Alberto Di Meglio, Marco Manca, Manuel Mazzara, Max Talanov
The BioDynaMo Project: a platform for computer simulations of biological dynamics
The paper contains inaccurate content and claims that need to be verified
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is a brief update on developments in the BioDynaMo project, a new platform for computer simulations for biological research. We will discuss the new capabilities of the simulator, important new concepts simulation methodology as well as its numerous applications to the computational biology and nanoscience communities.
[ { "created": "Fri, 5 Aug 2016 09:55:59 GMT", "version": "v1" }, { "created": "Fri, 19 Jan 2018 12:48:57 GMT", "version": "v2" } ]
2018-01-22
[ [ "Johard", "Leonard", "" ], [ "Breitwieser", "Lukas", "" ], [ "Di Meglio", "Alberto", "" ], [ "Manca", "Marco", "" ], [ "Mazzara", "Manuel", "" ], [ "Talanov", "Max", "" ] ]
This paper is a brief update on developments in the BioDynaMo project, a new platform for computer simulations in biological research. We discuss the new capabilities of the simulator and important new concepts in simulation methodology, as well as its numerous applications to the computational biology and nanoscience communities.
cs/0608002
Florentin Smarandache
Florentin Smarandache, Jean Dezert
An Introduction to the DSm Theory for the Combination of Paradoxical, Uncertain, and Imprecise Sources of Information
21 pages, many tables, figures. To appear in Information&Security International Journal, 2006
Presented at 13th International Congress of Cybernetics and Systems, Maribor, Slovenia, July 6-10, 2005.
null
null
cs.AI
null
The management and combination of uncertain, imprecise, fuzzy and even paradoxical or high conflicting sources of information has always been, and still remains today, of primal importance for the development of reliable modern information systems involving artificial reasoning. In this introduction, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT) in the literature, developed for dealing with imprecise, uncertain and paradoxical sources of information. We focus our presentation here rather on the foundations of DSmT, and on the two important new rules of combination, than on browsing specific applications of DSmT available in literature. Several simple examples are given throughout the presentation to show the efficiency and the generality of this new approach.
[ { "created": "Tue, 1 Aug 2006 15:31:13 GMT", "version": "v1" } ]
2007-05-23
[ [ "Smarandache", "Florentin", "" ], [ "Dezert", "Jean", "" ] ]
The management and combination of uncertain, imprecise, fuzzy, and even paradoxical or highly conflicting sources of information has always been, and still remains today, of primary importance for the development of reliable modern information systems involving artificial reasoning. In this introduction, we present a survey of our recent theory of plausible and paradoxical reasoning, known in the literature as Dezert-Smarandache Theory (DSmT), developed for dealing with imprecise, uncertain, and paradoxical sources of information. We focus our presentation here on the foundations of DSmT and on its two important new rules of combination, rather than on browsing the specific applications of DSmT available in the literature. Several simple examples are given throughout the presentation to show the efficiency and generality of this new approach.
2005.03322
Petr Tůma
Antonín Steinhauser and Petr Tůma
Database Traffic Interception for Graybox Detection of Stored and Context-Sensitive XSS
null
Digital Threats: Research and Practice, 1(3): 1-23, 2020
10.1145/3399668
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
XSS is a security vulnerability that permits injecting malicious code into the client side of a web application. In the simplest situations, XSS vulnerabilities arise when a web application includes the user input in the web output without due sanitization. Such simple XSS vulnerabilities can be detected fairly reliably with blackbox scanners, which inject malicious payload into sensitive parts of HTTP requests and look for the reflected values in the web output. Contemporary blackbox scanners are not effective against stored XSS vulnerabilities, where the malicious payload in an HTTP response originates from the database storage of the web application, rather than from the associated HTTP request. Similarly, many blackbox scanners do not systematically handle context-sensitive XSS vulnerabilities, where the user input is included in the web output after a transformation that prevents the scanner from recognizing the original value, but does not sanitize the value sufficiently. Among the combination of two basic data sources (stored vs reflected) and two basic vulnerability patterns (context sensitive vs not so), only one is therefore tested systematically by state-of-the-art blackbox scanners. Our work focuses on systematic coverage of the three remaining combinations. We present a graybox mechanism that extends a general purpose database to cooperate with our XSS scanner, reporting and injecting the test inputs at the boundary between the database and the web application. Furthermore, we design a mechanism for identifying the injected inputs in the web output even after encoding by the web application, and check whether the encoding sanitizes the injected inputs correctly in the respective browser context. We evaluate our approach on eight mature and technologically diverse web applications, discovering previously unknown and exploitable XSS flaws in each of those applications.
[ { "created": "Thu, 7 May 2020 08:38:38 GMT", "version": "v1" }, { "created": "Fri, 7 Aug 2020 14:55:05 GMT", "version": "v2" } ]
2020-08-10
[ [ "Steinhauser", "Antonín", "" ], [ "Tůma", "Petr", "" ] ]
XSS is a security vulnerability that permits injecting malicious code into the client side of a web application. In the simplest situations, XSS vulnerabilities arise when a web application includes the user input in the web output without due sanitization. Such simple XSS vulnerabilities can be detected fairly reliably with blackbox scanners, which inject a malicious payload into sensitive parts of HTTP requests and look for the reflected values in the web output. Contemporary blackbox scanners are not effective against stored XSS vulnerabilities, where the malicious payload in an HTTP response originates from the database storage of the web application rather than from the associated HTTP request. Similarly, many blackbox scanners do not systematically handle context-sensitive XSS vulnerabilities, where the user input is included in the web output after a transformation that prevents the scanner from recognizing the original value but does not sanitize the value sufficiently. Among the combinations of the two basic data sources (stored vs reflected) and the two basic vulnerability patterns (context-sensitive vs not), only one is therefore tested systematically by state-of-the-art blackbox scanners. Our work focuses on systematic coverage of the three remaining combinations. We present a graybox mechanism that extends a general-purpose database to cooperate with our XSS scanner, reporting and injecting the test inputs at the boundary between the database and the web application. Furthermore, we design a mechanism for identifying the injected inputs in the web output even after encoding by the web application, and check whether the encoding sanitizes the injected inputs correctly in the respective browser context. We evaluate our approach on eight mature and technologically diverse web applications, discovering previously unknown and exploitable XSS flaws in each of those applications.
2212.01683
Chang Shi
Chang Shi, Yi Zheng, Ann Majewicz Fey
Recognition and Prediction of Surgical Gestures and Trajectories Using Transformer Models in Robot-Assisted Surgery
Accepted at 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Surgical activity recognition and prediction can help provide important context in many Robot-Assisted Surgery (RAS) applications, for example, surgical progress monitoring and estimation, surgical skill evaluation, and shared control strategies during teleoperation. Transformer models were first developed for Natural Language Processing (NLP) to model word sequences and soon the method gained popularity for general sequence modeling tasks. In this paper, we propose the novel use of a Transformer model for three tasks: gesture recognition, gesture prediction, and trajectory prediction during RAS. We modify the original Transformer architecture to be able to generate the current gesture sequence, future gesture sequence, and future trajectory sequence estimations using only the current kinematic data of the surgical robot end-effectors. We evaluate our proposed models on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and use Leave-One-User-Out (LOUO) cross-validation to ensure the generalizability of our results. Our models achieve up to 89.3\% gesture recognition accuracy, 84.6\% gesture prediction accuracy (1 second ahead) and 2.71mm trajectory prediction error (1 second ahead). Our models are comparable to and able to outperform state-of-the-art methods while using only the kinematic data channel. This approach can enable near-real time surgical activity recognition and prediction.
[ { "created": "Sat, 3 Dec 2022 20:26:48 GMT", "version": "v1" } ]
2022-12-06
[ [ "Shi", "Chang", "" ], [ "Zheng", "Yi", "" ], [ "Fey", "Ann Majewicz", "" ] ]
Surgical activity recognition and prediction can help provide important context in many Robot-Assisted Surgery (RAS) applications, for example, surgical progress monitoring and estimation, surgical skill evaluation, and shared control strategies during teleoperation. Transformer models were first developed for Natural Language Processing (NLP) to model word sequences, and the method soon gained popularity for general sequence modeling tasks. In this paper, we propose the novel use of a Transformer model for three tasks: gesture recognition, gesture prediction, and trajectory prediction during RAS. We modify the original Transformer architecture to generate the current gesture sequence, future gesture sequence, and future trajectory sequence estimations using only the current kinematic data of the surgical robot end-effectors. We evaluate our proposed models on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and use Leave-One-User-Out (LOUO) cross-validation to ensure the generalizability of our results. Our models achieve up to 89.3% gesture recognition accuracy, 84.6% gesture prediction accuracy (1 second ahead), and 2.71 mm trajectory prediction error (1 second ahead). Our models are comparable to, and able to outperform, state-of-the-art methods while using only the kinematic data channel. This approach can enable near-real-time surgical activity recognition and prediction.
2004.13965
Megha Khosla
Vikram Waradpande, Daniel Kudenko, Megha Khosla
Graph-based State Representation for Deep Reinforcement Learning
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Deep RL approaches build much of their success on the ability of the deep neural network to generate useful internal representations. Nevertheless, they suffer from a high sample-complexity and starting with a good input representation can have a significant impact on the performance. In this paper, we exploit the fact that the underlying Markov decision process (MDP) represents a graph, which enables us to incorporate the topological information for effective state representation learning. Motivated by the recent success of node representations for several graph analytical tasks we specifically investigate the capability of node representation learning methods to effectively encode the topology of the underlying MDP in Deep RL. To this end we perform a comparative analysis of several models chosen from 4 different classes of representation learning algorithms for policy learning in grid-world navigation tasks, which are representative of a large class of RL problems. We find that all embedding methods outperform the commonly used matrix representation of grid-world environments in all of the studied cases. Moreoever, graph convolution based methods are outperformed by simpler random walk based methods and graph linear autoencoders.
[ { "created": "Wed, 29 Apr 2020 05:43:15 GMT", "version": "v1" }, { "created": "Fri, 20 Nov 2020 15:31:22 GMT", "version": "v2" }, { "created": "Tue, 16 Feb 2021 17:49:34 GMT", "version": "v3" } ]
2021-02-17
[ [ "Waradpande", "Vikram", "" ], [ "Kudenko", "Daniel", "" ], [ "Khosla", "Megha", "" ] ]
Deep RL approaches build much of their success on the ability of the deep neural network to generate useful internal representations. Nevertheless, they suffer from a high sample complexity, and starting with a good input representation can have a significant impact on performance. In this paper, we exploit the fact that the underlying Markov decision process (MDP) represents a graph, which enables us to incorporate the topological information for effective state representation learning. Motivated by the recent success of node representations for several graph analytical tasks, we specifically investigate the capability of node representation learning methods to effectively encode the topology of the underlying MDP in Deep RL. To this end, we perform a comparative analysis of several models chosen from 4 different classes of representation learning algorithms for policy learning in grid-world navigation tasks, which are representative of a large class of RL problems. We find that all embedding methods outperform the commonly used matrix representation of grid-world environments in all of the studied cases. Moreover, graph convolution based methods are outperformed by simpler random walk based methods and graph linear autoencoders.
2405.08337
Benjamin Sinclair
Benjamin Sinclair, Lucy Vivash, Jasmine Moses, Miranda Lynch, William Pham, Karina Dorfman, Cassandra Marotta, Shaun Koh, Jacob Bunyamin, Ella Rowsthorn, Alex Jarema, Himashi Peiris, Zhaolin Chen, Sandy R Shultz, David K Wright, Dexiao Kong, Sharon L. Naismith, Terence J. O'Brien, Meng Law
Perivascular space Identification Nnunet for Generalised Usage (PINGU)
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Perivascular spaces(PVSs) form a central component of the brain's waste clearance system, the glymphatic system. These structures are visible on MRI images, and their morphology is associated with aging and neurological disease. Manual quantification of PVS is time consuming and subjective. Numerous deep learning methods for PVS segmentation have been developed, however the majority have been developed and evaluated on homogenous datasets and high resolution scans, perhaps limiting their applicability for the wide range of image qualities acquired in clinic and research. In this work we train a nnUNet, a top-performing biomedical image segmentation algorithm, on a heterogenous training sample of manually segmented MRI images of a range of different qualities and resolutions from 6 different datasets. These are compared to publicly available deep learning methods for 3D segmentation of PVS. The resulting model, PINGU (Perivascular space Identification Nnunet for Generalised Usage), achieved voxel and cluster level dice scores of 0.50(SD=0.15), 0.63(0.17) in the white matter(WM), and 0.54(0.11), 0.66(0.17) in the basal ganglia(BG). Performance on data from unseen sites was substantially lower for both PINGU(0.20-0.38(WM, voxel), 0.29-0.58(WM, cluster), 0.22-0.36(BG, voxel), 0.46-0.60(BG, cluster)) and the publicly available algorithms(0.18-0.30(WM, voxel), 0.29-0.38(WM cluster), 0.10-0.20(BG, voxel), 0.15-0.37(BG, cluster)), but PINGU strongly outperformed the publicly available algorithms, particularly in the BG. Finally, training PINGU on manual segmentations from a single site with homogenous scan properties gave marginally lower performances on internal cross-validation, but in some cases gave higher performance on external validation. PINGU stands out as broad-use PVS segmentation tool, with particular strength in the BG, an area of PVS related to vascular disease and pathology.
[ { "created": "Tue, 14 May 2024 06:16:13 GMT", "version": "v1" }, { "created": "Fri, 17 May 2024 06:47:44 GMT", "version": "v2" } ]
2024-05-20
[ [ "Sinclair", "Benjamin", "" ], [ "Vivash", "Lucy", "" ], [ "Moses", "Jasmine", "" ], [ "Lynch", "Miranda", "" ], [ "Pham", "William", "" ], [ "Dorfman", "Karina", "" ], [ "Marotta", "Cassandra", "" ], [ "Koh", "Shaun", "" ], [ "Bunyamin", "Jacob", "" ], [ "Rowsthorn", "Ella", "" ], [ "Jarema", "Alex", "" ], [ "Peiris", "Himashi", "" ], [ "Chen", "Zhaolin", "" ], [ "Shultz", "Sandy R", "" ], [ "Wright", "David K", "" ], [ "Kong", "Dexiao", "" ], [ "Naismith", "Sharon L.", "" ], [ "O'Brien", "Terence J.", "" ], [ "Law", "Meng", "" ] ]
Perivascular spaces (PVSs) form a central component of the brain's waste clearance system, the glymphatic system. These structures are visible on MRI images, and their morphology is associated with aging and neurological disease. Manual quantification of PVSs is time-consuming and subjective. Numerous deep learning methods for PVS segmentation have been developed; however, the majority have been developed and evaluated on homogeneous datasets and high-resolution scans, perhaps limiting their applicability to the wide range of image qualities acquired in clinical and research settings. In this work we train nnUNet, a top-performing biomedical image segmentation algorithm, on a heterogeneous training sample of manually segmented MRI images of a range of different qualities and resolutions from 6 different datasets. These are compared to publicly available deep learning methods for 3D segmentation of PVSs. The resulting model, PINGU (Perivascular space Identification Nnunet for Generalised Usage), achieved voxel- and cluster-level Dice scores of 0.50 (SD=0.15) and 0.63 (0.17) in the white matter (WM), and 0.54 (0.11) and 0.66 (0.17) in the basal ganglia (BG). Performance on data from unseen sites was substantially lower for both PINGU (0.20-0.38 (WM, voxel), 0.29-0.58 (WM, cluster), 0.22-0.36 (BG, voxel), 0.46-0.60 (BG, cluster)) and the publicly available algorithms (0.18-0.30 (WM, voxel), 0.29-0.38 (WM, cluster), 0.10-0.20 (BG, voxel), 0.15-0.37 (BG, cluster)), but PINGU strongly outperformed the publicly available algorithms, particularly in the BG. Finally, training PINGU on manual segmentations from a single site with homogeneous scan properties gave marginally lower performance on internal cross-validation, but in some cases higher performance on external validation. PINGU stands out as a broad-use PVS segmentation tool, with particular strength in the BG, an area of PVSs related to vascular disease and pathology.
2403.11472
Minsu Kim
Minsu Kim, Jinwoo Hwang, Guseul Heo, Seiyeon Cho, Divya Mahajan, Jongse Park
Accelerating String-Key Learned Index Structures via Memoization-based Incremental Training
Accepted at VLDB '24; 12 pages + 2 pages (ref), 18 figures, 2 tables
null
null
null
cs.LG cs.AR cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learned indexes use machine learning models to learn the mappings between keys and their corresponding positions in key-value indexes. These indexes use the mapping information as training data. Learned indexes require frequent retrainings of their models to incorporate the changes introduced by update queries. To efficiently retrain the models, existing learned index systems often harness a linear algebraic QR factorization technique that performs matrix decomposition. This factorization approach processes all key-position pairs during each retraining, resulting in compute operations that grow linearly with the total number of keys and their lengths. Consequently, the retrainings create a severe performance bottleneck, especially for variable-length string keys, while the retrainings are crucial for maintaining high prediction accuracy and in turn, ensuring low query service latency. To address this performance problem, we develop an algorithm-hardware co-designed string-key learned index system, dubbed SIA. In designing SIA, we leverage a unique algorithmic property of the matrix decomposition-based training method. Exploiting the property, we develop a memoization-based incremental training scheme, which only requires computation over updated keys, while decomposition results of non-updated keys from previous computations can be reused. We further enhance SIA to offload a portion of this training process to an FPGA accelerator to not only relieve CPU resources for serving index queries (i.e., inference), but also accelerate the training itself. Our evaluation shows that compared to ALEX, LIPP, and SIndex, a state-of-the-art learned index systems, SIA-accelerated learned indexes offer 2.6x and 3.4x higher throughput on the two real-world benchmark suites, YCSB and Twitter cache trace, respectively.
[ { "created": "Mon, 18 Mar 2024 04:44:00 GMT", "version": "v1" } ]
2024-03-19
[ [ "Kim", "Minsu", "" ], [ "Hwang", "Jinwoo", "" ], [ "Heo", "Guseul", "" ], [ "Cho", "Seiyeon", "" ], [ "Mahajan", "Divya", "" ], [ "Park", "Jongse", "" ] ]
Learned indexes use machine learning models to learn the mappings between keys and their corresponding positions in key-value indexes. These indexes use the mapping information as training data. Learned indexes require frequent retrainings of their models to incorporate the changes introduced by update queries. To efficiently retrain the models, existing learned index systems often harness a linear algebraic QR factorization technique that performs matrix decomposition. This factorization approach processes all key-position pairs during each retraining, resulting in compute operations that grow linearly with the total number of keys and their lengths. Consequently, the retrainings create a severe performance bottleneck, especially for variable-length string keys, while the retrainings are crucial for maintaining high prediction accuracy and, in turn, ensuring low query service latency. To address this performance problem, we develop an algorithm-hardware co-designed string-key learned index system, dubbed SIA. In designing SIA, we leverage a unique algorithmic property of the matrix decomposition-based training method. Exploiting the property, we develop a memoization-based incremental training scheme, which only requires computation over updated keys, while decomposition results of non-updated keys from previous computations can be reused. We further enhance SIA to offload a portion of this training process to an FPGA accelerator, not only to relieve CPU resources for serving index queries (i.e., inference), but also to accelerate the training itself. Our evaluation shows that, compared to ALEX, LIPP, and SIndex, state-of-the-art learned index systems, SIA-accelerated learned indexes offer 2.6x and 3.4x higher throughput on the two real-world benchmark suites, YCSB and Twitter cache trace, respectively.
2407.00021
Qiqi Hou
Farzad Farhadzadeh, Qiqi Hou, Hoang Le, Amir Said, Randall Rauwendaal, Alex Bourd, Fatih Porikli
Neural Graphics Texture Compression Supporting Random Access
ECCV submission
null
null
null
cs.CV cs.GR eess.IV
http://creativecommons.org/licenses/by/4.0/
Advances in rendering have led to tremendous growth in texture assets, including resolution, complexity, and novel textures components, but this growth in data volume has not been matched by advances in its compression. Meanwhile Neural Image Compression (NIC) has advanced significantly and shown promising results, but the proposed methods cannot be directly adapted to neural texture compression. First, texture compression requires on-demand and real-time decoding with random access during parallel rendering (e.g. block texture decompression on GPUs). Additionally, NIC does not support multi-resolution reconstruction (mip-levels), nor does it have the ability to efficiently jointly compress different sets of texture channels. In this work, we introduce a novel approach to texture set compression that integrates traditional GPU texture representation and NIC techniques, designed to enable random access and support many-channel texture sets. To achieve this goal, we propose an asymmetric auto-encoder framework that employs a convolutional encoder to capture detailed information in a bottleneck-latent space, and at decoder side we utilize a fully connected network, whose inputs are sampled latent features plus positional information, for a given texture coordinate and mip level. This latent data is defined to enable simplified access to multi-resolution data by simply changing the scanning strides. Experimental results demonstrate that this approach provides much better results than conventional texture compression, and significant improvement over the latest method using neural networks.
[ { "created": "Mon, 6 May 2024 19:44:13 GMT", "version": "v1" } ]
2024-07-02
[ [ "Farhadzadeh", "Farzad", "" ], [ "Hou", "Qiqi", "" ], [ "Le", "Hoang", "" ], [ "Said", "Amir", "" ], [ "Rauwendaal", "Randall", "" ], [ "Bourd", "Alex", "" ], [ "Porikli", "Fatih", "" ] ]
Advances in rendering have led to tremendous growth in texture assets, including resolution, complexity, and novel texture components, but this growth in data volume has not been matched by advances in its compression. Meanwhile, Neural Image Compression (NIC) has advanced significantly and shown promising results, but the proposed methods cannot be directly adapted to neural texture compression. First, texture compression requires on-demand and real-time decoding with random access during parallel rendering (e.g., block texture decompression on GPUs). Additionally, NIC does not support multi-resolution reconstruction (mip levels), nor can it efficiently jointly compress different sets of texture channels. In this work, we introduce a novel approach to texture set compression that integrates traditional GPU texture representation and NIC techniques, designed to enable random access and support many-channel texture sets. To achieve this goal, we propose an asymmetric auto-encoder framework that employs a convolutional encoder to capture detailed information in a bottleneck latent space; at the decoder side, we utilize a fully connected network whose inputs are sampled latent features plus positional information for a given texture coordinate and mip level. This latent representation enables simplified access to multi-resolution data by simply changing the scanning strides. Experimental results demonstrate that this approach provides much better results than conventional texture compression and a significant improvement over the latest method using neural networks.
2110.15444
Jinyuan Jia
Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
10 Security and Privacy Problems in Large Foundation Models
A book chapter
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Foundation models--such as GPT, CLIP, and DINO--have achieved revolutionary progress in the past several years and are commonly believed to be a promising approach for general-purpose AI. In particular, self-supervised learning is adopted to pre-train a foundation model using a large amount of unlabeled data. A pre-trained foundation model is like an ``operating system'' of the AI ecosystem. Specifically, a foundation model can be used as a feature extractor for many downstream tasks with little or no labeled training data. Existing studies on foundation models mainly focused on pre-training a better foundation model to improve its performance on downstream tasks in non-adversarial settings, leaving its security and privacy in adversarial settings largely unexplored. A security or privacy issue of a pre-trained foundation model leads to a single point of failure for the AI ecosystem. In this book chapter, we discuss 10 basic security and privacy problems for the pre-trained foundation models, including six confidentiality problems, three integrity problems, and one availability problem. For each problem, we discuss potential opportunities and challenges. We hope our book chapter will inspire future research on the security and privacy of foundation models.
[ { "created": "Thu, 28 Oct 2021 21:45:53 GMT", "version": "v1" }, { "created": "Tue, 2 Nov 2021 02:12:12 GMT", "version": "v2" }, { "created": "Fri, 9 Jun 2023 15:53:54 GMT", "version": "v3" } ]
2023-06-12
[ [ "Jia", "Jinyuan", "" ], [ "Liu", "Hongbin", "" ], [ "Gong", "Neil Zhenqiang", "" ] ]
Foundation models--such as GPT, CLIP, and DINO--have achieved revolutionary progress in the past several years and are commonly believed to be a promising approach for general-purpose AI. In particular, self-supervised learning is adopted to pre-train a foundation model using a large amount of unlabeled data. A pre-trained foundation model is like an ``operating system'' of the AI ecosystem. Specifically, a foundation model can be used as a feature extractor for many downstream tasks with little or no labeled training data. Existing studies on foundation models mainly focused on pre-training a better foundation model to improve its performance on downstream tasks in non-adversarial settings, leaving its security and privacy in adversarial settings largely unexplored. A security or privacy issue of a pre-trained foundation model leads to a single point of failure for the AI ecosystem. In this book chapter, we discuss 10 basic security and privacy problems for the pre-trained foundation models, including six confidentiality problems, three integrity problems, and one availability problem. For each problem, we discuss potential opportunities and challenges. We hope our book chapter will inspire future research on the security and privacy of foundation models.
1706.07567
Chao-Yuan Wu
Chao-Yuan Wu, R. Manmatha, Alexander J. Smola, Philipp Kr\"ahenb\"uhl
Sampling Matters in Deep Embedding Learning
Add supplementary material. Paper published in ICCV 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.
[ { "created": "Fri, 23 Jun 2017 05:14:55 GMT", "version": "v1" }, { "created": "Tue, 16 Jan 2018 16:54:27 GMT", "version": "v2" } ]
2018-01-17
[ [ "Wu", "Chao-Yuan", "" ], [ "Manmatha", "R.", "" ], [ "Smola", "Alexander J.", "" ], [ "Krähenbühl", "Philipp", "" ] ]
Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.
1706.00176
Anh Nguyen
Anh Nguyen
3DTouch: Towards a Wearable 3D Input Device for 3D Applications
MS thesis, University of Wyoming, ProQuest
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three-dimensional (3D) applications have come to every corner of life. We present 3DTouch, a novel 3D wearable input device worn on the fingertip for interacting with 3D applications. 3DTouch is self-contained, and designed to universally work on various 3D platforms. The device employs touch input for the benefits of passive haptic feedback, and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices such as Kinect. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. We implemented a set of 3D interaction techniques including selection, translation, and rotation using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. With the 3DTouch project, we would like to provide an input device that reduces the gap between 3D applications and users.
[ { "created": "Thu, 1 Jun 2017 06:43:54 GMT", "version": "v1" } ]
2017-06-02
[ [ "Nguyen", "Anh", "" ] ]
Three-dimensional (3D) applications have come to every corner of life. We present 3DTouch, a novel 3D wearable input device worn on the fingertip for interacting with 3D applications. 3DTouch is self-contained, and designed to universally work on various 3D platforms. The device employs touch input for the benefits of passive haptic feedback, and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices such as Kinect. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. We implemented a set of 3D interaction techniques including selection, translation, and rotation using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. With the 3DTouch project, we would like to provide an input device that reduces the gap between 3D applications and users.
2305.17050
Cunxiang Wang
Cunxiang Wang, Zhikun Xu, Qipeng Guo, Xiangkun Hu, Xuefeng Bai, Zheng Zhang, Yue Zhang
Exploiting Abstract Meaning Representation for Open-Domain Question Answering
Accepted by ACL2023 findings, reviewer scores: 4 4 4
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database. Current systems leverage Pretrained Language Models (PLMs) to model the relationship between questions and passages. However, the diversity in surface form expressions can hinder the model's ability to capture accurate correlations, especially within complex contexts. Therefore, we utilize Abstract Meaning Representation (AMR) graphs to assist the model in understanding complex semantic information. We introduce a method known as Graph-as-Token (GST) to incorporate AMRs into PLMs. Results from Natural Questions (NQ) and TriviaQA (TQ) demonstrate that our GST method can significantly improve performance, resulting in up to 2.44/3.17 Exact Match score improvements on NQ/TQ respectively. Furthermore, our method enhances robustness and outperforms alternative Graph Neural Network (GNN) methods for integrating AMRs. To the best of our knowledge, we are the first to employ semantic graphs in ODQA.
[ { "created": "Fri, 26 May 2023 16:00:16 GMT", "version": "v1" } ]
2023-05-29
[ [ "Wang", "Cunxiang", "" ], [ "Xu", "Zhikun", "" ], [ "Guo", "Qipeng", "" ], [ "Hu", "Xiangkun", "" ], [ "Bai", "Xuefeng", "" ], [ "Zhang", "Zheng", "" ], [ "Zhang", "Yue", "" ] ]
The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database. Current systems leverage Pretrained Language Models (PLMs) to model the relationship between questions and passages. However, the diversity in surface form expressions can hinder the model's ability to capture accurate correlations, especially within complex contexts. Therefore, we utilize Abstract Meaning Representation (AMR) graphs to assist the model in understanding complex semantic information. We introduce a method known as Graph-as-Token (GST) to incorporate AMRs into PLMs. Results from Natural Questions (NQ) and TriviaQA (TQ) demonstrate that our GST method can significantly improve performance, resulting in up to 2.44/3.17 Exact Match score improvements on NQ/TQ respectively. Furthermore, our method enhances robustness and outperforms alternative Graph Neural Network (GNN) methods for integrating AMRs. To the best of our knowledge, we are the first to employ semantic graphs in ODQA.
2407.10887
Ahmed Salem
Mark Russinovich and Ahmed Salem
Hey, That's My Model! Introducing Chain & Hash, An LLM Fingerprinting Technique
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Amid growing concerns over the ease of theft and misuse of Large Language Models (LLMs), the need for fingerprinting models has increased. Fingerprinting, in this context, means that the model owner can link a given model to their original version, thereby identifying if their model is being misused or has been completely stolen. In this paper, we first define a set of five properties a successful fingerprint should satisfy; namely, the fingerprint should be Transparent, Efficient, Persistent, Robust, and Unforgeable. Next, we propose Chain & Hash, a new, simple fingerprinting approach that implements a fingerprint with a cryptographic flavor, achieving all these properties. Chain & Hash involves generating a set of questions (the fingerprints) along with a set of potential answers. These elements are hashed together using a secure hashing technique to select the value for each question, hence providing an unforgeability property, preventing adversaries from claiming false ownership. We evaluate the Chain & Hash technique on multiple models and demonstrate its robustness against benign transformations, such as fine-tuning on different datasets, and adversarial attempts to erase the fingerprint. Finally, our experiments demonstrate the efficiency of implementing Chain & Hash and its utility, where fingerprinted models achieve almost the same performance as non-fingerprinted ones across different benchmarks.
[ { "created": "Mon, 15 Jul 2024 16:38:56 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2024 07:39:41 GMT", "version": "v2" } ]
2024-07-18
[ [ "Russinovich", "Mark", "" ], [ "Salem", "Ahmed", "" ] ]
Amid growing concerns over the ease of theft and misuse of Large Language Models (LLMs), the need for fingerprinting models has increased. Fingerprinting, in this context, means that the model owner can link a given model to their original version, thereby identifying if their model is being misused or has been completely stolen. In this paper, we first define a set of five properties a successful fingerprint should satisfy; namely, the fingerprint should be Transparent, Efficient, Persistent, Robust, and Unforgeable. Next, we propose Chain & Hash, a new, simple fingerprinting approach that implements a fingerprint with a cryptographic flavor, achieving all these properties. Chain & Hash involves generating a set of questions (the fingerprints) along with a set of potential answers. These elements are hashed together using a secure hashing technique to select the value for each question, hence providing an unforgeability property, preventing adversaries from claiming false ownership. We evaluate the Chain & Hash technique on multiple models and demonstrate its robustness against benign transformations, such as fine-tuning on different datasets, and adversarial attempts to erase the fingerprint. Finally, our experiments demonstrate the efficiency of implementing Chain & Hash and its utility, where fingerprinted models achieve almost the same performance as non-fingerprinted ones across different benchmarks.
2006.13561
Tung Nguyen Thanh
Thanh-Tung Nguyen, Xuan-Phi Nguyen, Shafiq Joty, Xiaoli Li
Differentiable Window for Dynamic Local Attention
Accepted at ACL 2020
null
null
null
cs.LG cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose Differentiable Window, a new neural module and general purpose component for dynamic window selection. While universally applicable, we demonstrate a compelling use case of utilizing Differentiable Window to improve standard attention modules by enabling more focused attentions over the input regions. We propose two variants of Differentiable Window, and integrate them within the Transformer architecture in two novel ways. We evaluate our proposed approach on a myriad of NLP tasks, including machine translation, sentiment analysis, subject-verb agreement and language modeling. Our experimental results demonstrate consistent and sizable improvements across all tasks.
[ { "created": "Wed, 24 Jun 2020 08:47:26 GMT", "version": "v1" } ]
2020-06-25
[ [ "Nguyen", "Thanh-Tung", "" ], [ "Nguyen", "Xuan-Phi", "" ], [ "Joty", "Shafiq", "" ], [ "Li", "Xiaoli", "" ] ]
We propose Differentiable Window, a new neural module and general purpose component for dynamic window selection. While universally applicable, we demonstrate a compelling use case of utilizing Differentiable Window to improve standard attention modules by enabling more focused attentions over the input regions. We propose two variants of Differentiable Window, and integrate them within the Transformer architecture in two novel ways. We evaluate our proposed approach on a myriad of NLP tasks, including machine translation, sentiment analysis, subject-verb agreement and language modeling. Our experimental results demonstrate consistent and sizable improvements across all tasks.
1506.01058
Bahar Partov
Bahar Partov, Douglas J. Leith
Utility Fair Rate Allocation in LTE/802.11 Networks
13 pages, submitted to IEEE/ACM Transactions on Networking
null
10.1109/TNET.2016.2614252
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider proportional fair rate allocation in a heterogeneous network with a mix of LTE and 802.11 cells which supports multipath and multihomed operation (simultaneous connection of a user device to multiple LTE BSs and 802.11 APs). We show that the utility fair optimisation problem is non-convex but that a global optimum can be found by solving a sequence of convex optimisations in a distributed fashion. The result is a principled approach to offload from LTE to 802.11 and for exploiting LTE/802.11 path diversity to meet user traffic demands.
[ { "created": "Tue, 2 Jun 2015 20:54:29 GMT", "version": "v1" } ]
2016-11-18
[ [ "Partov", "Bahar", "" ], [ "Leith", "Douglas J.", "" ] ]
We consider proportional fair rate allocation in a heterogeneous network with a mix of LTE and 802.11 cells which supports multipath and multihomed operation (simultaneous connection of a user device to multiple LTE BSs and 802.11 APs). We show that the utility fair optimisation problem is non-convex but that a global optimum can be found by solving a sequence of convex optimisations in a distributed fashion. The result is a principled approach to offload from LTE to 802.11 and for exploiting LTE/802.11 path diversity to meet user traffic demands.
2108.01513
Weiyang Liu
Yandong Wen, Weiyang Liu, Adrian Weller, Bhiksha Raj, Rita Singh
SphereFace2: Binary Classification is All You Need for Deep Face Recognition
ICLR 2022 Spotlight (v3: Updated Appendix)
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art deep face recognition methods are mostly trained with a softmax-based multi-class classification framework. Despite being popular and effective, these methods still have a few shortcomings that limit empirical performance. In this paper, we start by identifying the discrepancy between training and evaluation in the existing multi-class classification framework and then discuss the potential limitations caused by the "competitive" nature of softmax normalization. Motivated by these limitations, we propose a novel binary classification training framework, termed SphereFace2. In contrast to existing methods, SphereFace2 circumvents the softmax normalization, as well as the corresponding closed-set assumption. This effectively bridges the gap between training and evaluation, enabling the representations to be improved individually by each binary classification task. Besides designing a specific well-performing loss function, we summarize a few general principles for this "one-vs-all" binary classification framework so that it can outperform current competitive methods. Our experiments on popular benchmarks demonstrate that SphereFace2 can consistently outperform state-of-the-art deep face recognition methods. The code has been made publicly available.
[ { "created": "Tue, 3 Aug 2021 13:58:45 GMT", "version": "v1" }, { "created": "Wed, 16 Mar 2022 06:58:28 GMT", "version": "v2" }, { "created": "Mon, 11 Apr 2022 03:49:28 GMT", "version": "v3" } ]
2022-04-12
[ [ "Wen", "Yandong", "" ], [ "Liu", "Weiyang", "" ], [ "Weller", "Adrian", "" ], [ "Raj", "Bhiksha", "" ], [ "Singh", "Rita", "" ] ]
State-of-the-art deep face recognition methods are mostly trained with a softmax-based multi-class classification framework. Despite being popular and effective, these methods still have a few shortcomings that limit empirical performance. In this paper, we start by identifying the discrepancy between training and evaluation in the existing multi-class classification framework and then discuss the potential limitations caused by the "competitive" nature of softmax normalization. Motivated by these limitations, we propose a novel binary classification training framework, termed SphereFace2. In contrast to existing methods, SphereFace2 circumvents the softmax normalization, as well as the corresponding closed-set assumption. This effectively bridges the gap between training and evaluation, enabling the representations to be improved individually by each binary classification task. Besides designing a specific well-performing loss function, we summarize a few general principles for this "one-vs-all" binary classification framework so that it can outperform current competitive methods. Our experiments on popular benchmarks demonstrate that SphereFace2 can consistently outperform state-of-the-art deep face recognition methods. The code has been made publicly available.
1611.00491
Huimei Han
Huimei Han, Xudong Guo, Ying Li
A High Throughput Pilot Allocation for M2M Communication in Crowded Massive MIMO Systems
5 pages,6 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new scheme to resolve the intra-cell pilot collision for M2M communication in crowded massive multiple-input multiple-output (MIMO) systems is proposed. The proposed scheme permits those failed user equipments (UEs), judged by a strongest-user collision resolution (SUCR) protocol, to contend for the idle pilots, i.e., the pilots that are not selected by any UE in the initial step. This scheme is called as SUCR combined idle pilots access (SUCR-IPA). To analyze the performance of the SUCR-IPA scheme, we develop a simple method to compute the access success probability of the UEs in each random access slot (RAST). The simulation results coincide well with the analysis. It is also shown that, compared to the SUCR protocol, the proposed SUCR-IPA scheme increases the throughput of the system significantly, and thus decreases the number of access attempts dramatically.
[ { "created": "Wed, 2 Nov 2016 07:36:23 GMT", "version": "v1" } ]
2016-11-03
[ [ "Han", "Huimei", "" ], [ "Guo", "Xudong", "" ], [ "Li", "Ying", "" ] ]
A new scheme to resolve the intra-cell pilot collision for M2M communication in crowded massive multiple-input multiple-output (MIMO) systems is proposed. The proposed scheme permits those failed user equipments (UEs), judged by a strongest-user collision resolution (SUCR) protocol, to contend for the idle pilots, i.e., the pilots that are not selected by any UE in the initial step. This scheme is called as SUCR combined idle pilots access (SUCR-IPA). To analyze the performance of the SUCR-IPA scheme, we develop a simple method to compute the access success probability of the UEs in each random access slot (RAST). The simulation results coincide well with the analysis. It is also shown that, compared to the SUCR protocol, the proposed SUCR-IPA scheme increases the throughput of the system significantly, and thus decreases the number of access attempts dramatically.
2010.14121
Jun Zhuang
Jun Zhuang, Mohammad Al Hasan
Deperturbation of Online Social Networks via Bayesian Label Transition
TL;DR: GraphLT is the first model that adapts the Bayesian label transition method on GCNs for deperturbation in online social networks. Our work is accepted by SDM 2022
null
null
null
cs.LG cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online social networks (OSNs) classify users into different categories based on their online activities and interests, a task which is referred to as a node classification task. Such a task can be solved effectively using Graph Convolutional Networks (GCNs). However, a small number of users, so-called perturbators, may perform random activities on an OSN, which significantly deteriorate the performance of a GCN-based node classification task. Existing works in this direction defend GCNs either by adversarial training or by identifying the attacker nodes followed by their removal. However, both of these approaches require that the attack patterns or attacker nodes be identified first, which is difficult in the scenario when the number of perturbator nodes is very small. In this work, we develop a GCN defense model, namely GraphLT, which uses the concept of label transition. GraphLT assumes that perturbators' random activities deteriorate GCN's performance. To overcome this issue, GraphLT subsequently uses a novel Bayesian label transition model, which takes GCN's predicted labels and applies label transitions by Gibbs-sampling-based inference and thus repairs GCN's prediction to achieve better node classification. Extensive experiments on seven benchmark datasets show that GraphLT considerably enhances the performance of the node classifier in an unperturbed environment; furthermore, it validates that GraphLT can successfully repair a GCN-based node classifier with superior performance than several competing methods.
[ { "created": "Tue, 27 Oct 2020 08:15:12 GMT", "version": "v1" }, { "created": "Mon, 2 Nov 2020 01:56:07 GMT", "version": "v2" }, { "created": "Tue, 18 Jan 2022 23:55:23 GMT", "version": "v3" } ]
2022-01-20
[ [ "Zhuang", "Jun", "" ], [ "Hasan", "Mohammad Al", "" ] ]
Online social networks (OSNs) classify users into different categories based on their online activities and interests, a task which is referred to as a node classification task. Such a task can be solved effectively using Graph Convolutional Networks (GCNs). However, a small number of users, so-called perturbators, may perform random activities on an OSN, which significantly deteriorate the performance of a GCN-based node classification task. Existing works in this direction defend GCNs either by adversarial training or by identifying the attacker nodes followed by their removal. However, both of these approaches require that the attack patterns or attacker nodes be identified first, which is difficult in the scenario when the number of perturbator nodes is very small. In this work, we develop a GCN defense model, namely GraphLT, which uses the concept of label transition. GraphLT assumes that perturbators' random activities deteriorate GCN's performance. To overcome this issue, GraphLT subsequently uses a novel Bayesian label transition model, which takes GCN's predicted labels and applies label transitions by Gibbs-sampling-based inference and thus repairs GCN's prediction to achieve better node classification. Extensive experiments on seven benchmark datasets show that GraphLT considerably enhances the performance of the node classifier in an unperturbed environment; furthermore, it validates that GraphLT can successfully repair a GCN-based node classifier with superior performance than several competing methods.
2210.03839
Nina Pardal
Ivo Koch, Nina Pardal, Vinicius Fernandes dos Santos
Edge deletion to tree-like graph classes
10 pages, no figures
null
null
null
cs.DM cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a fixed property (graph class) ${\Pi}$, given a graph $G$ and an integer $k$, the ${\Pi}$-deletion problem consists in deciding if we can turn $G$ into a graph with the property ${\Pi}$ by deleting at most $k$ edges. The ${\Pi}$-deletion problem is known to be NP-hard for most of the well-studied graph classes, such as chordal, interval, bipartite, planar, comparability and permutation graphs, among others; even deletion to cacti is known to be NP-hard for general graphs. However, there is a notable exception: the deletion problem to trees is polynomial. Motivated by this fact, we study the deletion problem for some classes similar to trees, addressing in this way a knowledge gap in the literature. We prove that deletion to cacti is hard even when the input is a bipartite graph. On the positive side, we show that the problem becomes tractable when the input is chordal, and for the special case of quasi-threshold graphs we give a simpler and faster algorithm. In addition, we present sufficient structural conditions on the graph class ${\Pi}$ that imply the NP-hardness of the ${\Pi}$-deletion problem, and show that deletion from general graphs to some well-known subclasses of forests is NP-hard.
[ { "created": "Fri, 7 Oct 2022 22:25:07 GMT", "version": "v1" }, { "created": "Thu, 13 Jul 2023 16:29:55 GMT", "version": "v2" } ]
2023-07-14
[ [ "Koch", "Ivo", "" ], [ "Pardal", "Nina", "" ], [ "Santos", "Vinicius Fernandes dos", "" ] ]
For a fixed property (graph class) ${\Pi}$, given a graph $G$ and an integer $k$, the ${\Pi}$-deletion problem consists in deciding if we can turn $G$ into a graph with the property ${\Pi}$ by deleting at most $k$ edges. The ${\Pi}$-deletion problem is known to be NP-hard for most of the well-studied graph classes, such as chordal, interval, bipartite, planar, comparability and permutation graphs, among others; even deletion to cacti is known to be NP-hard for general graphs. However, there is a notable exception: the deletion problem to trees is polynomial. Motivated by this fact, we study the deletion problem for some classes similar to trees, addressing in this way a knowledge gap in the literature. We prove that deletion to cacti is hard even when the input is a bipartite graph. On the positive side, we show that the problem becomes tractable when the input is chordal, and for the special case of quasi-threshold graphs we give a simpler and faster algorithm. In addition, we present sufficient structural conditions on the graph class ${\Pi}$ that imply the NP-hardness of the ${\Pi}$-deletion problem, and show that deletion from general graphs to some well-known subclasses of forests is NP-hard.
2404.10924
Mohammad Hasan
Croix Gyurek and Niloy Talukder and Mohammad Al Hasan
Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
For natural language understanding and generation, embedding concepts using an order-based representation is an essential task. Unlike traditional point vector based representation, an order-based representation imposes geometric constraints on the representation vectors for explicitly capturing various semantic relationships that may exist between a pair of concepts. In existing literature, several approaches on order-based embedding have been proposed, mostly focusing on capturing hierarchical relationships; examples include vectors in Euclidean space, complex, Hyperbolic, order, and Box Embedding. Box embedding creates region-based rich representation of concepts, but along the process it sacrifices simplicity, requiring a custom-made optimization scheme for learning the representation. Hyperbolic embedding improves embedding quality by exploiting the ever-expanding property of Hyperbolic space, but it also suffers from the same fate as box embedding as gradient descent like optimization is not simple in the Hyperbolic space. In this work, we propose Binder, a novel approach for order-based representation. Binder uses binary vectors for embedding, so the embedding vectors are compact with an order of magnitude smaller footprint than other methods. Binder uses a simple and efficient optimization scheme for learning representation vectors with a linear time complexity. Our comprehensive experimental results show that Binder is very accurate, yielding competitive results on the representation task. But Binder stands out from its competitors on the transitive closure link prediction task as it can learn concept embeddings just from the direct edges, whereas all existing order-based approaches rely on the indirect edges.
[ { "created": "Tue, 16 Apr 2024 21:52:55 GMT", "version": "v1" } ]
2024-04-18
[ [ "Gyurek", "Croix", "" ], [ "Talukder", "Niloy", "" ], [ "Hasan", "Mohammad Al", "" ] ]
For natural language understanding and generation, embedding concepts using an order-based representation is an essential task. Unlike traditional point vector based representation, an order-based representation imposes geometric constraints on the representation vectors for explicitly capturing various semantic relationships that may exist between a pair of concepts. In existing literature, several approaches on order-based embedding have been proposed, mostly focusing on capturing hierarchical relationships; examples include vectors in Euclidean space, complex, Hyperbolic, order, and Box Embedding. Box embedding creates region-based rich representation of concepts, but along the process it sacrifices simplicity, requiring a custom-made optimization scheme for learning the representation. Hyperbolic embedding improves embedding quality by exploiting the ever-expanding property of Hyperbolic space, but it also suffers from the same fate as box embedding as gradient descent like optimization is not simple in the Hyperbolic space. In this work, we propose Binder, a novel approach for order-based representation. Binder uses binary vectors for embedding, so the embedding vectors are compact with an order of magnitude smaller footprint than other methods. Binder uses a simple and efficient optimization scheme for learning representation vectors with a linear time complexity. Our comprehensive experimental results show that Binder is very accurate, yielding competitive results on the representation task. But Binder stands out from its competitors on the transitive closure link prediction task as it can learn concept embeddings just from the direct edges, whereas all existing order-based approaches rely on the indirect edges.
1801.08811
Guohua Zhang
Guohua Zhang, Yulin Hu and Qinwei He
Constructing LDPC Codes from Partition and Latin-Style Splicing
7 pages, 2 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel method guaranteeing nondecreasing girth is presented for constructing longer low-density parity-check (LDPC) codes from shorter ones. The parity-check matrix of a shorter base code is decomposed into N (N>=2) non-overlapping components of the same size. These components are then combined to form the parity-check matrix of a longer code, according to a given N*N Latin square. To illustrate the method, longer quasi-cyclic (QC) LDPC codes with girth at least eight and satisfactory performance are obtained from shorter QC-LDPC codes with girth eight but poor performance. The proposed method naturally includes several well-known methods as special cases, but is much more general than these existing approaches.
[ { "created": "Fri, 26 Jan 2018 13:59:43 GMT", "version": "v1" } ]
2018-01-29
[ [ "Zhang", "Guohua", "" ], [ "Hu", "Yulin", "" ], [ "He", "Qinwei", "" ] ]
A novel method guaranteeing nondecreasing girth is presented for constructing longer low-density parity-check (LDPC) codes from shorter ones. The parity-check matrix of a shorter base code is decomposed into N (N>=2) non-overlapping components of the same size. These components are then combined to form the parity-check matrix of a longer code, according to a given N*N Latin square. To illustrate the method, longer quasi-cyclic (QC) LDPC codes with girth at least eight and satisfactory performance are obtained from shorter QC-LDPC codes with girth eight but poor performance. The proposed method naturally includes several well-known methods as special cases, but is much more general than these existing approaches.
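The block arrangement described above (decompose, then splice per a Latin square) can be sketched with a toy matrix. This shows only the layout; the girth guarantee depends on the paper's conditions on the decomposition, and the base matrix here is a made-up example, not a QC-LDPC code:

```python
import numpy as np

def splice(components, latin):
    """Arrange the N non-overlapping components of a base parity-check
    matrix into an N x N block matrix according to a Latin square.
    (Sketch of the block layout only; girth guarantees require the
    paper's conditions on the decomposition.)"""
    n = len(latin)
    rows = [np.hstack([components[latin[i][j]] for j in range(n)])
            for i in range(n)]
    return np.vstack(rows)

# Toy base matrix split into N=2 non-overlapping components of equal size.
H  = np.array([[1, 1, 0, 1],
               [0, 1, 1, 1]])
H0 = np.array([[1, 0, 0, 1],
               [0, 1, 0, 0]])
H1 = H - H0                       # the remaining ones (non-overlapping)
L  = [[0, 1], [1, 0]]             # a 2x2 Latin square

H_long = splice([H0, H1], L)
assert H_long.shape == (4, 8)
# Each block row/column uses every component exactly once, so the row and
# column weights of the base matrix carry over to the longer code.
assert np.array_equal(H_long[:2], np.hstack([H0, H1]))
```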
1910.07931
Siqi Bao
Siqi Bao, Huang He, Fan Wang, Hua Wu and Haifeng Wang
PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable
Accepted for publication at ACL2020. First two authors contributed equally
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-training models have proven effective for a wide range of natural language processing tasks. Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge-grounded dialogues, and conversational question answering. In this framework, we adopt flexible attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation. We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Two reciprocal tasks, response generation and latent act recognition, are designed and carried out simultaneously within a shared network. Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.
[ { "created": "Thu, 17 Oct 2019 14:09:42 GMT", "version": "v1" }, { "created": "Thu, 7 Nov 2019 13:37:16 GMT", "version": "v2" }, { "created": "Thu, 30 Apr 2020 16:06:37 GMT", "version": "v3" } ]
2020-05-01
[ [ "Bao", "Siqi", "" ], [ "He", "Huang", "" ], [ "Wang", "Fan", "" ], [ "Wu", "Hua", "" ], [ "Wang", "Haifeng", "" ] ]
Pre-training models have proven effective for a wide range of natural language processing tasks. Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge-grounded dialogues, and conversational question answering. In this framework, we adopt flexible attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation. We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Two reciprocal tasks, response generation and latent act recognition, are designed and carried out simultaneously within a shared network. Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.
0802.1296
Dusko Pavlovic
Dusko Pavlovic
On quantum statistics in data analysis
7 pages, Quantum Interaction 2008 (Oxford, April 2008) v3: added two diagrams, changed some wordings
null
null
null
cs.IR math.CT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Originally, quantum probability theory was developed to analyze statistical phenomena in quantum systems, where classical probability theory does not apply because the lattice of measurable sets is not necessarily distributive. On the other hand, it is well known that the lattices of concepts that arise in data analysis are in general also non-distributive, albeit for completely different reasons. In his recent book, van Rijsbergen argues that many of the logical tools developed for quantum systems are also suitable for applications in information retrieval. I explore the mathematical support for this idea on an abstract vector space model, covering several forms of data analysis (information retrieval, data mining, collaborative filtering, formal concept analysis...), and roughly based on an idea from categorical quantum mechanics. It turns out that quantum (i.e., noncommutative) probability distributions arise already in this rudimentary mathematical framework. We show that a Bell-type inequality must be satisfied by the standard similarity measures if they are used for preference predictions. The fact that already a very general, abstract version of the vector space model yields simple counterexamples for such inequalities seems to be an indicator of a genuine need for quantum statistics in data analysis.
[ { "created": "Sun, 10 Feb 2008 01:42:31 GMT", "version": "v1" }, { "created": "Fri, 22 Feb 2008 12:08:53 GMT", "version": "v2" }, { "created": "Tue, 13 May 2008 18:46:10 GMT", "version": "v3" } ]
2009-04-18
[ [ "Pavlovic", "Dusko", "" ] ]
Originally, quantum probability theory was developed to analyze statistical phenomena in quantum systems, where classical probability theory does not apply because the lattice of measurable sets is not necessarily distributive. On the other hand, it is well known that the lattices of concepts that arise in data analysis are in general also non-distributive, albeit for completely different reasons. In his recent book, van Rijsbergen argues that many of the logical tools developed for quantum systems are also suitable for applications in information retrieval. I explore the mathematical support for this idea on an abstract vector space model, covering several forms of data analysis (information retrieval, data mining, collaborative filtering, formal concept analysis...), and roughly based on an idea from categorical quantum mechanics. It turns out that quantum (i.e., noncommutative) probability distributions arise already in this rudimentary mathematical framework. We show that a Bell-type inequality must be satisfied by the standard similarity measures if they are used for preference predictions. The fact that already a very general, abstract version of the vector space model yields simple counterexamples for such inequalities seems to be an indicator of a genuine need for quantum statistics in data analysis.
1909.10584
Alexandros Psomas
Shaddin Dughmi, Rad Niazadeh, Alexandros Psomas, S. Matthew Weinberg
Persuasion and Incentives Through the Lens of Duality
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lagrangian duality underlies both classical and modern mechanism design. In particular, the dual perspective often permits simple and detail-free characterizations of optimal and approximately optimal mechanisms. This paper applies this same methodology to a close cousin of traditional mechanism design, one which shares conceptual and technical elements with its more mature relative: the burgeoning field of persuasion. The dual perspective permits us to analyze optimal persuasion schemes both in settings which have been analyzed in prior work, as well as for natural generalizations which we are the first to explore in depth. Most notably, we permit combining persuasion policies with payments, which serve to augment the persuasion power of the scheme. In both single and multi-receiver settings, as well as under a variety of constraints on payments, we employ duality to obtain structural insights, as well as tractable and simple characterizations of optimal policies.
[ { "created": "Mon, 23 Sep 2019 19:27:26 GMT", "version": "v1" } ]
2019-09-25
[ [ "Dughmi", "Shaddin", "" ], [ "Niazadeh", "Rad", "" ], [ "Psomas", "Alexandros", "" ], [ "Weinberg", "S. Matthew", "" ] ]
Lagrangian duality underlies both classical and modern mechanism design. In particular, the dual perspective often permits simple and detail-free characterizations of optimal and approximately optimal mechanisms. This paper applies this same methodology to a close cousin of traditional mechanism design, one which shares conceptual and technical elements with its more mature relative: the burgeoning field of persuasion. The dual perspective permits us to analyze optimal persuasion schemes both in settings which have been analyzed in prior work, as well as for natural generalizations which we are the first to explore in depth. Most notably, we permit combining persuasion policies with payments, which serve to augment the persuasion power of the scheme. In both single and multi-receiver settings, as well as under a variety of constraints on payments, we employ duality to obtain structural insights, as well as tractable and simple characterizations of optimal policies.
2312.15626
Shusaku Egami
Shusaku Egami, Takanori Ugai, Masateru Oota, Kyoumoto Matsushita, Takahiro Kawamura, Kouji Kozaki, Ken Fukuda
RDF-star2Vec: RDF-star Graph Embeddings for Data Mining
13 pages, 6 figures, and this paper has been accepted by IEEE Access
IEEE Access, Volume 11, pp.142030-142042, 2023
10.1109/ACCESS.2023.3341029
null
cs.AI cs.CL cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Knowledge Graphs (KGs) such as Resource Description Framework (RDF) data represent relationships between various entities through the structure of triples (<subject, predicate, object>). Knowledge graph embedding (KGE) is crucial in machine learning applications, specifically in node classification and link prediction tasks. KGE remains a vital research topic within the semantic web community. RDF-star introduces the concept of a quoted triple (QT), a specific form of triple employed either as the subject or object within another triple. Moreover, RDF-star permits a QT to act as a compositional entity within another QT, thereby enabling the representation of recursive, hyper-relational KGs with nested structures. However, existing KGE models fail to adequately learn the semantics of QTs and entities, primarily because they do not account for RDF-star graphs containing multi-leveled nested QTs and QT-QT relationships. This study introduces RDF-star2Vec, a novel KGE model specifically designed for RDF-star graphs. RDF-star2Vec introduces graph walk techniques that enable probabilistic transitions between a QT and its compositional entities. Feature vectors for QTs, entities, and relations are derived from the generated sequences through the structured skip-gram model. Additionally, we provide a dataset and a benchmarking framework for data mining tasks focused on complex RDF-star graphs. Evaluative experiments demonstrated that RDF-star2Vec yielded superior performance compared to recent extensions of RDF2Vec in various tasks including classification, clustering, entity relatedness, and QT similarity.
[ { "created": "Mon, 25 Dec 2023 06:32:14 GMT", "version": "v1" } ]
2023-12-27
[ [ "Egami", "Shusaku", "" ], [ "Ugai", "Takanori", "" ], [ "Oota", "Masateru", "" ], [ "Matsushita", "Kyoumoto", "" ], [ "Kawamura", "Takahiro", "" ], [ "Kozaki", "Kouji", "" ], [ "Fukuda", "Ken", "" ] ]
Knowledge Graphs (KGs) such as Resource Description Framework (RDF) data represent relationships between various entities through the structure of triples (<subject, predicate, object>). Knowledge graph embedding (KGE) is crucial in machine learning applications, specifically in node classification and link prediction tasks. KGE remains a vital research topic within the semantic web community. RDF-star introduces the concept of a quoted triple (QT), a specific form of triple employed either as the subject or object within another triple. Moreover, RDF-star permits a QT to act as a compositional entity within another QT, thereby enabling the representation of recursive, hyper-relational KGs with nested structures. However, existing KGE models fail to adequately learn the semantics of QTs and entities, primarily because they do not account for RDF-star graphs containing multi-leveled nested QTs and QT-QT relationships. This study introduces RDF-star2Vec, a novel KGE model specifically designed for RDF-star graphs. RDF-star2Vec introduces graph walk techniques that enable probabilistic transitions between a QT and its compositional entities. Feature vectors for QTs, entities, and relations are derived from the generated sequences through the structured skip-gram model. Additionally, we provide a dataset and a benchmarking framework for data mining tasks focused on complex RDF-star graphs. Evaluative experiments demonstrated that RDF-star2Vec yielded superior performance compared to recent extensions of RDF2Vec in various tasks including classification, clustering, entity relatedness, and QT similarity.
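The walk technique described above, where a walk may step between a quoted triple and its compositional entities, can be sketched on a toy graph. The graph, names, and uniform transition probabilities are all assumptions for illustration; in the actual model the transitions are parameterized, not uniform:

```python
import random

# Toy RDF-star graph: a quoted triple "QT1" = <alice, knows, bob> is itself
# the subject of another triple with object "certainty_high". Names are
# hypothetical; edges run between the QT and its compositional entities.
neighbors = {
    "QT1": ["alice", "bob", "certainty_high"],  # QT -> components / object
    "alice": ["QT1"],                           # component -> enclosing QT
    "bob": ["QT1"],
    "certainty_high": [],
}

def walk(start, length, rng):
    """Uniform random walk that may step between a quoted triple and its
    compositional entities, producing a node sequence that a skip-gram
    model could then consume."""
    seq = [start]
    node = start
    for _ in range(length):
        nxt = neighbors.get(node, [])
        if not nxt:
            break
        node = rng.choice(nxt)
        seq.append(node)
    return seq

seq = walk("alice", 4, random.Random(0))
assert seq[0] == "alice" and seq[1] == "QT1"  # the first hop enters the QT
```

The point of the QT-to-component transitions is that the QT node and the entities inside it end up in each other's skip-gram contexts, so their embeddings become related.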
2209.15425
Zhaokun Zhou
Zhaokun Zhou, Yuesheng Zhu, Chao He, Yaowei Wang, Shuicheng Yan, Yonghong Tian, Li Yuan
Spikformer: When Spiking Neural Network Meets Transformer
null
null
null
null
cs.NE cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider two biologically plausible structures, the Spiking Neural Network (SNN) and the self-attention mechanism. The former offers an energy-efficient and event-driven paradigm for deep learning, while the latter has the ability to capture feature dependencies, enabling the Transformer to achieve good performance. It is intuitively promising to explore the marriage between them. In this paper, we consider leveraging both the self-attention capability and the biological properties of SNNs, and propose a novel Spiking Self Attention (SSA) as well as a powerful framework, named Spiking Transformer (Spikformer). The SSA mechanism in Spikformer models sparse visual features using spike-form Query, Key, and Value without softmax. Since its computation is sparse and avoids multiplication, SSA is efficient and has low computational energy consumption. We show that Spikformer with SSA can outperform state-of-the-art SNN-like frameworks in image classification on both neuromorphic and static datasets. Spikformer (66.3M parameters), comparable in size to SEW-ResNet-152 (60.2M, 69.26%), achieves 74.81% top-1 accuracy on ImageNet using 4 time steps, the state of the art among directly trained SNN models.
[ { "created": "Thu, 29 Sep 2022 14:16:49 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2022 12:45:05 GMT", "version": "v2" } ]
2022-11-23
[ [ "Zhou", "Zhaokun", "" ], [ "Zhu", "Yuesheng", "" ], [ "He", "Chao", "" ], [ "Wang", "Yaowei", "" ], [ "Yan", "Shuicheng", "" ], [ "Tian", "Yonghong", "" ], [ "Yuan", "Li", "" ] ]
We consider two biologically plausible structures, the Spiking Neural Network (SNN) and the self-attention mechanism. The former offers an energy-efficient and event-driven paradigm for deep learning, while the latter has the ability to capture feature dependencies, enabling the Transformer to achieve good performance. It is intuitively promising to explore the marriage between them. In this paper, we consider leveraging both the self-attention capability and the biological properties of SNNs, and propose a novel Spiking Self Attention (SSA) as well as a powerful framework, named Spiking Transformer (Spikformer). The SSA mechanism in Spikformer models sparse visual features using spike-form Query, Key, and Value without softmax. Since its computation is sparse and avoids multiplication, SSA is efficient and has low computational energy consumption. We show that Spikformer with SSA can outperform state-of-the-art SNN-like frameworks in image classification on both neuromorphic and static datasets. Spikformer (66.3M parameters), comparable in size to SEW-ResNet-152 (60.2M, 69.26%), achieves 74.81% top-1 accuracy on ImageNet using 4 time steps, the state of the art among directly trained SNN models.
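The softmax-free, spike-form attention described above can be sketched as follows. Because Q, K, V are binary spike tensors, all entries of Q K^T are non-negative, so no softmax is needed; the scaling factor here is an assumed stand-in for the paper's:

```python
import numpy as np

def spiking_self_attention(Q, K, V, scale=0.25):
    """Spike-form attention sketch: Q, K, V are binary spike matrices, so
    Q @ K.T @ V reduces to additions (up to the final scaling), and no
    softmax is required because all attention scores are already
    non-negative. 'scale' is an assumed value, not the paper's."""
    attn = Q @ K.T          # non-negative integer scores
    return attn @ V * scale

rng = np.random.default_rng(0)
Q = (rng.random((4, 8)) > 0.5).astype(np.int32)  # toy spike tensors
K = (rng.random((4, 8)) > 0.5).astype(np.int32)
V = (rng.random((4, 8)) > 0.5).astype(np.int32)

out = spiking_self_attention(Q, K, V)
assert out.shape == (4, 8)
assert np.all(out >= 0)   # non-negative scores: softmax is unnecessary
```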
2008.04115
Hyeonseong Jeon
Hyeonseong Jeon, Youngoh Bang, Junyaup Kim, and Simon S. Woo
T-GD: Transferable GAN-generated Images Detection Framework
ICML 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in Generative Adversarial Networks (GANs) enable the generation of highly realistic images, raising concerns about their misuse for malicious purposes. Detecting these GAN-generated images (GAN-images) becomes increasingly challenging due to the significant reduction of underlying artifacts and specific patterns. The absence of such traces can hinder detection algorithms from identifying GAN-images and transferring knowledge to identify other types of GAN-images as well. In this work, we present the Transferable GAN-images Detection framework T-GD, a robust transferable framework for an effective detection of GAN-images. T-GD is composed of a teacher and a student model that can iteratively teach and evaluate each other to improve the detection performance. First, we train the teacher model on the source dataset and use it as a starting point for learning the target dataset. To train the student model, we inject noise by mixing up the source and target datasets, while constraining the weight variation to preserve the starting point. Our approach is a self-training method, but distinguishes itself from prior approaches by focusing on improving the transferability of GAN-image detection. T-GD achieves high performance on the source dataset by overcoming catastrophic forgetting and effectively detecting state-of-the-art GAN-images with only a small volume of data without any metadata information.
[ { "created": "Mon, 10 Aug 2020 13:20:19 GMT", "version": "v1" } ]
2020-08-11
[ [ "Jeon", "Hyeonseong", "" ], [ "Bang", "Youngoh", "" ], [ "Kim", "Junyaup", "" ], [ "Woo", "Simon S.", "" ] ]
Recent advancements in Generative Adversarial Networks (GANs) enable the generation of highly realistic images, raising concerns about their misuse for malicious purposes. Detecting these GAN-generated images (GAN-images) becomes increasingly challenging due to the significant reduction of underlying artifacts and specific patterns. The absence of such traces can hinder detection algorithms from identifying GAN-images and transferring knowledge to identify other types of GAN-images as well. In this work, we present the Transferable GAN-images Detection framework T-GD, a robust transferable framework for an effective detection of GAN-images. T-GD is composed of a teacher and a student model that can iteratively teach and evaluate each other to improve the detection performance. First, we train the teacher model on the source dataset and use it as a starting point for learning the target dataset. To train the student model, we inject noise by mixing up the source and target datasets, while constraining the weight variation to preserve the starting point. Our approach is a self-training method, but distinguishes itself from prior approaches by focusing on improving the transferability of GAN-image detection. T-GD achieves high performance on the source dataset by overcoming catastrophic forgetting and effectively detecting state-of-the-art GAN-images with only a small volume of data without any metadata information.
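The noise-injection step described above, mixing source and target datasets, resembles mixup-style augmentation and can be sketched as follows. The beta parameter and the plain convex combination are assumptions; the paper's exact mixing schedule and weight-variation constraint are not reproduced here:

```python
import numpy as np

def mixup(x_source, x_target, alpha=0.5, rng=None):
    """Inject noise by convexly mixing source- and target-domain samples,
    mixup-style. 'alpha' is a hypothetical beta-distribution parameter;
    the paper's schedule may differ."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_source + (1.0 - lam) * x_target, lam

rng = np.random.default_rng(0)
xs = np.ones((2, 3))          # stand-in source batch
xt = np.zeros((2, 3))         # stand-in target batch
mixed, lam = mixup(xs, xt, rng=rng)
assert 0.0 <= lam <= 1.0
assert np.allclose(mixed, lam * xs)   # since the target batch is zero
```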
1911.06055
Fernando Morales
Fernando A. Morales
The RaPID-OMEGA system: Room and Proctor Intelligent Decider for large scale tests programming
21 pages, 12 tables
Yugoslav Journal of Operations Research, 2020
10.2298/YJOR191115019M
null
cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the mathematical modeling for the problem of choosing rooms and proctoring crews for massive tests, together with its implementation as the open-box system RaPID-Omega. The mathematical model is a binary integer programming problem: a combination of the 0-1 knapsack problem and the job-assignment problem. The model makes decisions according to the following criteria, in order of priority: minimization of labor-hours, maximization of equity in the distribution of duties, and maximization of proctoring quality. The software is a digital solution for the aforementioned problem, which is a common need in educational institutions offering large, coordinated, lower-division courses. The system can be downloaded from \url{https://sites.google.com/a/unal.edu.co/fernando-a-morales-j/home/research/software}
[ { "created": "Thu, 14 Nov 2019 12:09:31 GMT", "version": "v1" }, { "created": "Mon, 18 Nov 2019 18:59:38 GMT", "version": "v2" }, { "created": "Thu, 9 Apr 2020 21:22:17 GMT", "version": "v3" }, { "created": "Thu, 16 Jul 2020 14:05:28 GMT", "version": "v4" }, { "created": "Fri, 17 Jul 2020 14:11:37 GMT", "version": "v5" } ]
2020-08-26
[ [ "Morales", "Fernando A.", "" ] ]
We present the mathematical modeling for the problem of choosing rooms and proctoring crews for massive tests, together with its implementation as the open-box system RaPID-Omega. The mathematical model is a binary integer programming problem: a combination of the 0-1 knapsack problem and the job-assignment problem. The model makes decisions according to the following criteria, in order of priority: minimization of labor-hours, maximization of equity in the distribution of duties, and maximization of proctoring quality. The software is a digital solution for the aforementioned problem, which is a common need in educational institutions offering large, coordinated, lower-division courses. The system can be downloaded from \url{https://sites.google.com/a/unal.edu.co/fernando-a-morales-j/home/research/software}
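The binary-program core described above (binary decision variables, a knapsack-style capacity, a quality objective) can be illustrated with a brute-force toy. The data and the single-objective simplification are assumptions; the real system optimizes three criteria lexicographically and would use an ILP solver, not enumeration:

```python
from itertools import product

def best_assignment(hours, quality, capacity):
    """Brute-force a tiny binary program: choose rooms (x_i in {0,1}) so
    that total proctor labor-hours stay within capacity while proctoring
    quality is maximized -- a toy stand-in for the paper's lexicographic
    objective. Exponential; only for illustration."""
    best = (None, -1)
    for x in product((0, 1), repeat=len(hours)):
        h = sum(xi * hi for xi, hi in zip(x, hours))
        q = sum(xi * qi for xi, qi in zip(x, quality))
        if h <= capacity and q > best[1]:
            best = (x, q)
    return best

x, q = best_assignment(hours=[4, 3, 2], quality=[7, 5, 4], capacity=5)
assert x == (0, 1, 1) and q == 9   # rooms 2 and 3 fit in 5 labor-hours
```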
1109.0915
Claudia Picardi
Daniele Mundici and Claudia Picardi
Drawing Sound Conclusions from Unsound Premises
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given sets $\Phi_1=\{\phi_{11},...,\phi_{1u(1)}\}, ...,\Phi_{z}=\{\phi_{z1},...,\phi_{zu(z)}\}$ of boolean formulas, a formula $\omega$ follows from the conjunction $\bigwedge\Phi_i= \bigwedge \phi_{ij}$ iff $\neg \omega\wedge \bigwedge_{i=1}^z \Phi_i$ is unsatisfiable. Now assume that, given integers $0\leq e_i < u(i)$, we must check if $\neg \omega\wedge \bigwedge_{i=1}^z \Phi'_i$ remains unsatisfiable, where $\Phi'_i\subseteq \Phi_i$ is obtained by deleting $e_{i}$ arbitrarily chosen formulas of $\Phi_i$, for each $i=1,...,z$. Intuitively, does $\omega$ {\it stably} follow, after removing $e_i$ random formulas from each $\Phi_i$? We construct a quadratic reduction of this problem to the consequence problem in infinite-valued Łukasiewicz logic Ł$_\infty$. In this way we obtain a self-contained proof that the Ł$_\infty$-consequence problem is coNP-complete.
[ { "created": "Mon, 5 Sep 2011 14:37:27 GMT", "version": "v1" } ]
2011-09-06
[ [ "Mundici", "Daniele", "" ], [ "Picardi", "Claudia", "" ] ]
Given sets $\Phi_1=\{\phi_{11},...,\phi_{1u(1)}\}, ...,\Phi_{z}=\{\phi_{z1},...,\phi_{zu(z)}\}$ of boolean formulas, a formula $\omega$ follows from the conjunction $\bigwedge\Phi_i= \bigwedge \phi_{ij}$ iff $\neg \omega\wedge \bigwedge_{i=1}^z \Phi_i$ is unsatisfiable. Now assume that, given integers $0\leq e_i < u(i)$, we must check if $\neg \omega\wedge \bigwedge_{i=1}^z \Phi'_i$ remains unsatisfiable, where $\Phi'_i\subseteq \Phi_i$ is obtained by deleting $e_{i}$ arbitrarily chosen formulas of $\Phi_i$, for each $i=1,...,z$. Intuitively, does $\omega$ {\it stably} follow, after removing $e_i$ random formulas from each $\Phi_i$? We construct a quadratic reduction of this problem to the consequence problem in infinite-valued Łukasiewicz logic Ł$_\infty$. In this way we obtain a self-contained proof that the Ł$_\infty$-consequence problem is coNP-complete.
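The "stable consequence" question above can be made concrete with a brute-force checker over tiny instances, representing formulas as Python predicates over truth assignments. This enumerates all deletion choices and all assignments, so it is exponential and purely illustrative; it is not the paper's quadratic reduction:

```python
from itertools import combinations, product

def unsat(formulas, n_vars):
    """True iff the conjunction of 'formulas' has no satisfying assignment."""
    return not any(all(f(a) for f in formulas)
                   for a in product((False, True), repeat=n_vars))

def stably_follows(groups, deletions, omega, n_vars):
    """Check that omega still follows from the conjunction after deleting
    any e_i formulas from each group Phi_i (brute force over all choices)."""
    not_omega = lambda a: not omega(a)
    for kept in product(*[combinations(g, len(g) - e)
                          for g, e in zip(groups, deletions)]):
        flat = [f for grp in kept for f in grp]
        if not unsat([not_omega] + flat, n_vars):
            return False
    return True

# Two variables a[0], a[1]; Phi_1 gives two independent reasons for omega,
# so deleting either one still leaves omega a consequence.
phi11 = lambda a: a[0]
phi12 = lambda a: a[0] or a[1]
omega = lambda a: a[0] or a[1]

assert stably_follows([[phi11, phi12]], [1], omega, n_vars=2)
```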
1508.03898
EPTCS
Julien Signoles (CEA LIST, Software Security Lab)
Software Architecture of Code Analysis Frameworks Matters: The Frama-C Example
In Proceedings F-IDE 2015, arXiv:1508.03388
EPTCS 187, 2015, pp. 86-96
10.4204/EPTCS.187.7
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Implementing large software, such as software analyzers intended for use in industrial settings, requires a well-engineered software architecture in order to ease its daily development and its maintenance over its lifecycle. If the analyzer is not a single tool but an open, extensible, collaborative framework in which external developers may develop plug-ins that collaborate with each other, such a well-designed architecture becomes even more important. In this experience report, we explain the difficulties of developing and maintaining open, extensible, collaborative analysis frameworks, through the example of Frama-C, a platform dedicated to the analysis of code written in C. We also present the upcoming new software architecture of Frama-C and how it aims to solve some of these issues.
[ { "created": "Mon, 17 Aug 2015 01:37:20 GMT", "version": "v1" } ]
2015-08-18
[ [ "Signoles", "Julien", "", "CEA LIST, Software Security Lab" ] ]
Implementing large software, such as software analyzers intended for use in industrial settings, requires a well-engineered software architecture in order to ease its daily development and its maintenance over its lifecycle. If the analyzer is not a single tool but an open, extensible, collaborative framework in which external developers may develop plug-ins that collaborate with each other, such a well-designed architecture becomes even more important. In this experience report, we explain the difficulties of developing and maintaining open, extensible, collaborative analysis frameworks, through the example of Frama-C, a platform dedicated to the analysis of code written in C. We also present the upcoming new software architecture of Frama-C and how it aims to solve some of these issues.
2402.06669
Luis Javier Garc\'ia Villalba
Raquel Ramos L\'opez, Ana Lucila Sandoval Orozco, Luis Javier Garc\'ia Villalba
Compression effects and scene details on the source camera identification of digital videos
null
Expert Systems with Applications, Vol. 170, pp. 114515, May 2021
10.1016/j.eswa.2020.114515
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The continuous growth of technologies like 4G and 5G has led to massive use of mobile devices such as smartphones and tablets. This phenomenon, combined with the fact that people use mobile phones for longer periods of time, has made mobile phones the main source of visual information. However, their reliability as a true representation of reality cannot be taken for granted due to the constant growth of editing software, which makes it easier to alter original content without leaving a noticeable trace of the modification. It is therefore essential to introduce forensic analysis mechanisms to guarantee the authenticity or integrity of a given digital video, particularly if it may be considered as evidence in legal proceedings. This paper addresses the branch of multimedia forensic analysis that identifies the acquisition source of a video by exploiting the unique traces left in visual content by the camera sensor of the mobile device. To do this, a technique that identifies the acquisition source of digital videos from mobile devices is presented. It involves three stages: (1) extraction of the sensor fingerprint by applying the block-based technique; (2) filtering of the strong component of the PRNU signal to improve the quality of the sensor fingerprint; (3) classification of digital videos in an open scenario, that is, one where the forensic analyst does not need access to the device that recorded the video in order to determine its origin. The main contribution of the proposed technique is the elimination of scene details to improve the PRNU fingerprint; it should be noted that such techniques had previously been applied to digital images, not to digital videos.
[ { "created": "Wed, 7 Feb 2024 09:14:18 GMT", "version": "v1" } ]
2024-02-14
[ [ "López", "Raquel Ramos", "" ], [ "Orozco", "Ana Lucila Sandoval", "" ], [ "Villalba", "Luis Javier García", "" ] ]
The continuous growth of technologies like 4G and 5G has led to massive use of mobile devices such as smartphones and tablets. This phenomenon, combined with the fact that people use mobile phones for longer periods of time, has made mobile phones the main source of visual information. However, their reliability as a true representation of reality cannot be taken for granted due to the constant growth of editing software, which makes it easier to alter original content without leaving a noticeable trace of the modification. It is therefore essential to introduce forensic analysis mechanisms to guarantee the authenticity or integrity of a given digital video, particularly if it may be considered as evidence in legal proceedings. This paper addresses the branch of multimedia forensic analysis that identifies the acquisition source of a video by exploiting the unique traces left in visual content by the camera sensor of the mobile device. To do this, a technique that identifies the acquisition source of digital videos from mobile devices is presented. It involves three stages: (1) extraction of the sensor fingerprint by applying the block-based technique; (2) filtering of the strong component of the PRNU signal to improve the quality of the sensor fingerprint; (3) classification of digital videos in an open scenario, that is, one where the forensic analyst does not need access to the device that recorded the video in order to determine its origin. The main contribution of the proposed technique is the elimination of scene details to improve the PRNU fingerprint; it should be noted that such techniques had previously been applied to digital images, not to digital videos.
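The PRNU pipeline described above (residual extraction, fingerprint averaging, correlation-based matching) can be sketched on synthetic data. The mean-filter denoiser, the multiplicative noise model, and all parameters are assumptions standing in for the wavelet denoisers and real sensor data used in this line of work:

```python
import numpy as np

def noise_residual(img):
    """Noise residual = image minus a denoised version. A crude 3x3 mean
    filter stands in for the wavelet denoiser typically used in PRNU
    work (an assumption for this sketch)."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    denoised = sum(pad[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    return img - denoised

def fingerprint(frames):
    """Averaging many residuals cancels scene content and keeps the
    sensor's PRNU pattern."""
    return np.mean([noise_residual(f) for f in frames], axis=0)

def correlation(a, b):
    """Normalized cross-correlation used to match a probe residual
    against a candidate camera fingerprint."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
prnu = rng.normal(0.0, 0.1, (32, 32))                    # simulated sensor pattern
frames = [rng.normal(0.5, 0.05, (32, 32)) * (1 + prnu) for _ in range(30)]
fp = fingerprint(frames)

probe = rng.normal(0.5, 0.05, (32, 32)) * (1 + prnu)     # same "camera"
other = rng.normal(0.5, 0.05, (32, 32))                  # different "camera"
assert correlation(noise_residual(probe), fp) > correlation(noise_residual(other), fp)
```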
2407.19928
Alfio Lazzaro
Alfio Lazzaro
Enabling Message Passing Interface Containers on the LUMI Supercomputer
13 pages, presented at the Nordic e-Infrastructure Collaboration Conference (NeIC) 2024, 27-19 May 2024, Tallinn, Estonia
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Containers represent a convenient way of packing applications with dependencies for easy user-level installation and productivity. When running on supercomputers, it becomes crucial to optimize the containers to exploit the performance optimizations provided by the system vendors. In this paper, we discuss an approach we have developed for deploying containerized applications on the LUMI supercomputer, specifically for running applications based on Message Passing Interface (MPI) parallelization. We show how users can build and run containers and get the expected performance. The proposed MPI containers can be provided on LUMI so that users can use them as base images. Although we only refer to the LUMI supercomputer, similar concepts can be applied to the case of other supercomputers.
[ { "created": "Mon, 29 Jul 2024 12:02:00 GMT", "version": "v1" } ]
2024-07-30
[ [ "Lazzaro", "Alfio", "" ] ]
Containers represent a convenient way of packing applications with dependencies for easy user-level installation and productivity. When running on supercomputers, it becomes crucial to optimize the containers to exploit the performance optimizations provided by the system vendors. In this paper, we discuss an approach we have developed for deploying containerized applications on the LUMI supercomputer, specifically for running applications based on Message Passing Interface (MPI) parallelization. We show how users can build and run containers and get the expected performance. The proposed MPI containers can be provided on LUMI so that users can use them as base images. Although we only refer to the LUMI supercomputer, similar concepts can be applied to the case of other supercomputers.
0907.4994
R Doomun
R. K. Pateriya, J. L. Rana, S. C. Shrivastava, Jaideep Patel
A Proposed Algorithm to Improve Security & Efficiency of SSL-TLS Servers Using Batch RSA Decryption
5 pages, International Journal of Computer Science and Information Security, IJCSIS, Impact Factor 0.423
IJCSIS July 2009, Volume 3, ISSN 1947 5500
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today, the Internet has become an essential part of our lives, with over 90 percent of e-commerce conducted on it. A security algorithm is therefore necessary to assure producer-client transactions and to protect financial applications. The applicability of the RSA algorithm derives from its properties of confidentiality, safe authentication, data safety, and integrity on the Internet. Thus, such networks can be used more easily through practical access over short, medium, and even long distances and from different public places. RSA encryption on the client side is relatively cheap, whereas the corresponding decryption on the server side is expensive because its private exponent is much larger. SSL-TLS servers therefore become swamped performing public-key decryption operations when the number of simultaneous requests increases quickly. The batch RSA method is useful for such highly loaded web servers. Our proposed algorithm improves the performance of SSL-TLS servers by reducing the response time and the clients' tolerable waiting time. The proposed algorithm should provide a reasonable response time and optimize server performance significantly. On the encryption side, to withstand attacks such as the brute-force attack and subtle attacks, we also adopted a parameter generation method that sieves all parameters strictly and filters out every insecure parameter.
[ { "created": "Tue, 28 Jul 2009 20:17:04 GMT", "version": "v1" } ]
2009-07-30
[ [ "Pateriya", "R. K.", "" ], [ "Rana", "J. L.", "" ], [ "Shrivastava", "S. C.", "" ], [ "Patel", "Jaideep", "" ] ]
Today, the Internet has become an essential part of our lives, with over 90 percent of e-commerce conducted on it. A security algorithm is therefore necessary to assure producer-client transactions and to protect financial applications. The applicability of the RSA algorithm derives from its properties of confidentiality, safe authentication, data safety, and integrity on the Internet. Thus, such networks can be used more easily through practical access over short, medium, and even long distances and from different public places. RSA encryption on the client side is relatively cheap, whereas the corresponding decryption on the server side is expensive because its private exponent is much larger. SSL-TLS servers therefore become swamped performing public-key decryption operations when the number of simultaneous requests increases quickly. The batch RSA method is useful for such highly loaded web servers. Our proposed algorithm improves the performance of SSL-TLS servers by reducing the response time and the clients' tolerable waiting time. The proposed algorithm should provide a reasonable response time and optimize server performance significantly. On the encryption side, to withstand attacks such as the brute-force attack and subtle attacks, we also adopted a parameter generation method that sieves all parameters strictly and filters out every insecure parameter.
1911.00922
Yuhao Su
Yuhao Su and Jie Ding
Variable Grouping Based Bayesian Additive Regression Tree
5 pages, 3 tables
null
null
null
cs.LG eess.SP stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using ensemble methods for regression has been a great success in obtaining high-accuracy predictions. Examples include Bagging, Random Forest, Boosting, BART (Bayesian additive regression tree), and their variants. In this paper, we propose a new perspective, named variable grouping, to enhance predictive performance. The main idea is to seek a potential grouping of variables such that there is no nonlinear interaction term between variables of different groups. Given a sum-of-learners model, each learner is responsible for only one group of variables, which is more efficient for modeling nonlinear interactions. We propose a two-stage method named variable grouping based Bayesian additive regression tree (GBART), with a well-developed Python package, gbart, available. The first stage searches for potential interactions and an appropriate grouping of variables. The second stage builds a final model based on the discovered groups. Experiments on synthetic and real data show that the proposed method can perform significantly better than classical approaches.
[ { "created": "Sun, 3 Nov 2019 16:08:56 GMT", "version": "v1" }, { "created": "Tue, 5 Nov 2019 02:16:02 GMT", "version": "v2" } ]
2019-11-06
[ [ "Su", "Yuhao", "" ], [ "Ding", "Jie", "" ] ]
Using ensemble methods for regression has been a great success in obtaining high-accuracy predictions. Examples include Bagging, Random Forest, Boosting, BART (Bayesian additive regression tree), and their variants. In this paper, we propose a new perspective, named variable grouping, to enhance predictive performance. The main idea is to seek a potential grouping of variables such that there is no nonlinear interaction term between variables of different groups. Given a sum-of-learners model, each learner is responsible for only one group of variables, which is more efficient for modeling nonlinear interactions. We propose a two-stage method named variable grouping based Bayesian additive regression tree (GBART), with a well-developed Python package, gbart, available. The first stage searches for potential interactions and an appropriate grouping of variables. The second stage builds a final model based on the discovered groups. Experiments on synthetic and real data show that the proposed method can perform significantly better than classical approaches.
2306.03535
Nianlong Gu
Nianlong Gu, Richard H.R. Hahnloser
SciLit: A Platform for Joint Scientific Literature Discovery, Summarization and Citation Generation
Accepted at ACL 2023 System Demonstration
null
10.18653/v1/2023.acl-demo.22
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scientific writing involves retrieving, summarizing, and citing relevant papers, which can be time-consuming processes in large and rapidly evolving fields. By making these processes inter-operable, natural language processing (NLP) provides opportunities for creating end-to-end assistive writing tools. We propose SciLit, a pipeline that automatically recommends relevant papers, extracts highlights, and suggests a reference sentence as a citation of a paper, taking into consideration the user-provided context and keywords. SciLit efficiently recommends papers from large databases of hundreds of millions of papers using a two-stage pre-fetching and re-ranking literature search system that flexibly deals with addition and removal of a paper database. We provide a convenient user interface that displays the recommended papers as extractive summaries and that offers abstractively-generated citing sentences which are aligned with the provided context and which mention the chosen keyword(s). Our assistive tool for literature discovery and scientific writing is available at https://scilit.vercel.app
[ { "created": "Tue, 6 Jun 2023 09:34:45 GMT", "version": "v1" }, { "created": "Mon, 6 Nov 2023 15:53:23 GMT", "version": "v2" } ]
2023-11-07
[ [ "Gu", "Nianlong", "" ], [ "Hahnloser", "Richard H. R.", "" ] ]
Scientific writing involves retrieving, summarizing, and citing relevant papers, which can be time-consuming processes in large and rapidly evolving fields. By making these processes inter-operable, natural language processing (NLP) provides opportunities for creating end-to-end assistive writing tools. We propose SciLit, a pipeline that automatically recommends relevant papers, extracts highlights, and suggests a reference sentence as a citation of a paper, taking into consideration the user-provided context and keywords. SciLit efficiently recommends papers from large databases of hundreds of millions of papers using a two-stage pre-fetching and re-ranking literature search system that flexibly deals with addition and removal of a paper database. We provide a convenient user interface that displays the recommended papers as extractive summaries and that offers abstractively-generated citing sentences which are aligned with the provided context and which mention the chosen keyword(s). Our assistive tool for literature discovery and scientific writing is available at https://scilit.vercel.app
0811.0731
Romain Couillet
Romain Couillet, Merouane Debbah
Cognitive OFDM network sensing: a free probability approach
12 pages, 10 figures, 2 tables
null
null
null
cs.IT cs.AI math.IT math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a practical power detection scheme for OFDM terminals, based on recent free probability tools, is proposed. The objective is for the receiving terminal to determine the transmission power and the number of surrounding base stations in the network. However, the system dimensions of the network model turn energy detection into an under-determined problem. The focus of this paper is then twofold: (i) to discuss the maximum amount of information that an OFDM terminal can gather from the surrounding base stations in the network, and (ii) to propose a practical solution for blind cell detection using the free deconvolution tool. The efficiency of this solution is measured through simulations, which show better performance than classical power detection methods.
[ { "created": "Wed, 5 Nov 2008 14:34:23 GMT", "version": "v1" }, { "created": "Tue, 25 Nov 2008 19:05:04 GMT", "version": "v2" } ]
2008-11-25
[ [ "Couillet", "Romain", "" ], [ "Debbah", "Merouane", "" ] ]
In this paper, a practical power detection scheme for OFDM terminals, based on recent free probability tools, is proposed. The objective is for the receiving terminal to determine the transmission power and the number of surrounding base stations in the network. However, the system dimensions of the network model turn energy detection into an under-determined problem. The focus of this paper is then twofold: (i) to discuss the maximum amount of information that an OFDM terminal can gather from the surrounding base stations in the network, and (ii) to propose a practical solution for blind cell detection using the free deconvolution tool. The efficiency of this solution is measured through simulations, which show better performance than classical power detection methods.
2405.05347
Juho Leinonen
Charles Koutcheme, Nicola Dainese, Sami Sarsa, Juho Leinonen, Arto Hellas, Paul Denny
Benchmarking Educational Program Repair
15 pages, 2 figures, 3 tables. Non-archival report presented at the NeurIPS'23 Workshop on Generative AI for Education (GAIED)
null
null
null
cs.SE cs.AI cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
The emergence of large language models (LLMs) has sparked enormous interest due to their potential application across a range of educational tasks. For example, recent work in programming education has used LLMs to generate learning resources, improve error messages, and provide feedback on code. However, one factor that limits progress within the field is that much of the research uses bespoke datasets and different evaluation metrics, making direct comparisons between results unreliable. Thus, there is a pressing need for standardization and benchmarks that facilitate the equitable comparison of competing approaches. One task where LLMs show great promise is program repair, which can be used to provide debugging support and next-step hints to students. In this article, we propose a novel educational program repair benchmark. We curate two high-quality publicly available programming datasets, present a unified evaluation procedure introducing a novel evaluation metric rouge@k for approximating the quality of repairs, and evaluate a set of five recent models to establish baseline performance.
[ { "created": "Wed, 8 May 2024 18:23:59 GMT", "version": "v1" } ]
2024-05-10
[ [ "Koutcheme", "Charles", "" ], [ "Dainese", "Nicola", "" ], [ "Sarsa", "Sami", "" ], [ "Leinonen", "Juho", "" ], [ "Hellas", "Arto", "" ], [ "Denny", "Paul", "" ] ]
The emergence of large language models (LLMs) has sparked enormous interest due to their potential application across a range of educational tasks. For example, recent work in programming education has used LLMs to generate learning resources, improve error messages, and provide feedback on code. However, one factor that limits progress within the field is that much of the research uses bespoke datasets and different evaluation metrics, making direct comparisons between results unreliable. Thus, there is a pressing need for standardization and benchmarks that facilitate the equitable comparison of competing approaches. One task where LLMs show great promise is program repair, which can be used to provide debugging support and next-step hints to students. In this article, we propose a novel educational program repair benchmark. We curate two high-quality publicly available programming datasets, present a unified evaluation procedure introducing a novel evaluation metric rouge@k for approximating the quality of repairs, and evaluate a set of five recent models to establish baseline performance.
2305.18119
Yibo Guo
Yibo Guo and Mingxin Li and Jingting Zong and Mingliang Xu
Emergent Incident Response for Unmanned Warehouses with Multi-agent Systems
13 pages, 7 figures
null
null
null
cs.RO cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned warehouses are an important part of logistics, and improving their operational efficiency can effectively enhance service efficiency. However, due to the complexity of unmanned warehouse systems and their susceptibility to errors, incidents may occur during their operation, most often in inbound and outbound operations, which can decrease operational efficiency. Hence it is crucial to improve the response to such incidents. This paper proposes a collaborative optimization algorithm for emergent incident response based on Safe-MADDPG. To meet safety requirements during emergent incident response, we investigated the intrinsic hidden relationships between various factors. By obtaining constraint information of agents during the emergent incident response process and of the dynamic environment of unmanned warehouses on agents, the algorithm reduces safety risks and avoids the occurrence of chain accidents; this enables an unmanned system to complete emergent incident response tasks and achieve its optimization objectives: (1) minimizing the losses caused by emergent incidents; and (2) maximizing the operational efficiency of inbound and outbound operations during the response process. A series of experiments conducted in a simulated unmanned warehouse scenario demonstrate the effectiveness of the proposed method.
[ { "created": "Mon, 29 May 2023 14:30:35 GMT", "version": "v1" } ]
2023-05-30
[ [ "Guo", "Yibo", "" ], [ "Li", "Mingxin", "" ], [ "Zong", "Jingting", "" ], [ "Xu", "Mingliang", "" ] ]
Unmanned warehouses are an important part of logistics, and improving their operational efficiency can effectively enhance service efficiency. However, due to the complexity of unmanned warehouse systems and their susceptibility to errors, incidents may occur during their operation, most often in inbound and outbound operations, which can decrease operational efficiency. Hence it is crucial to improve the response to such incidents. This paper proposes a collaborative optimization algorithm for emergent incident response based on Safe-MADDPG. To meet safety requirements during emergent incident response, we investigated the intrinsic hidden relationships between various factors. By obtaining constraint information of agents during the emergent incident response process and of the dynamic environment of unmanned warehouses on agents, the algorithm reduces safety risks and avoids the occurrence of chain accidents; this enables an unmanned system to complete emergent incident response tasks and achieve its optimization objectives: (1) minimizing the losses caused by emergent incidents; and (2) maximizing the operational efficiency of inbound and outbound operations during the response process. A series of experiments conducted in a simulated unmanned warehouse scenario demonstrate the effectiveness of the proposed method.
2207.11088
Xin Zhou Dr.
Xin Zhou, Donghui Lin, Yong Liu, Chunyan Miao
Layer-refined Graph Convolutional Networks for Recommendation
Accepted as a research track paper in ICDE 2023
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommendation models utilizing Graph Convolutional Networks (GCNs) have achieved state-of-the-art performance, as they can integrate both the node information and the topological structure of the user-item interaction graph. However, these GCN-based recommendation models not only suffer from over-smoothing when stacking too many layers but also bear performance degeneration resulting from the existence of noise in user-item interactions. In this paper, we first identify a recommendation dilemma of over-smoothing and solution collapsing in current GCN-based models. Specifically, these models usually aggregate all layer embeddings for node updating and achieve their best recommendation performance within a few layers because of over-smoothing. Conversely, if we place learnable weights on layer embeddings for node updating, the weight space will always collapse to a fixed point, at which the weighting of the ego layer almost holds all. We propose a layer-refined GCN model, dubbed LayerGCN, that refines layer representations during information propagation and node updating of GCN. Moreover, previous GCN-based recommendation models aggregate all incoming information from neighbors without distinguishing the noise nodes, which deteriorates the recommendation performance. Our model further prunes the edges of the user-item interaction graph following a degree-sensitive probability instead of the uniform distribution. Experimental results show that the proposed model outperforms the state-of-the-art models significantly on four public datasets with fast training convergence. The implementation code of the proposed method is available at https://github.com/enoche/ImRec.
[ { "created": "Fri, 22 Jul 2022 13:54:59 GMT", "version": "v1" }, { "created": "Fri, 25 Nov 2022 00:57:32 GMT", "version": "v2" } ]
2022-11-28
[ [ "Zhou", "Xin", "" ], [ "Lin", "Donghui", "" ], [ "Liu", "Yong", "" ], [ "Miao", "Chunyan", "" ] ]
Recommendation models utilizing Graph Convolutional Networks (GCNs) have achieved state-of-the-art performance, as they can integrate both the node information and the topological structure of the user-item interaction graph. However, these GCN-based recommendation models not only suffer from over-smoothing when stacking too many layers but also bear performance degeneration resulting from the existence of noise in user-item interactions. In this paper, we first identify a recommendation dilemma of over-smoothing and solution collapsing in current GCN-based models. Specifically, these models usually aggregate all layer embeddings for node updating and achieve their best recommendation performance within a few layers because of over-smoothing. Conversely, if we place learnable weights on layer embeddings for node updating, the weight space will always collapse to a fixed point, at which the weighting of the ego layer almost holds all. We propose a layer-refined GCN model, dubbed LayerGCN, that refines layer representations during information propagation and node updating of GCN. Moreover, previous GCN-based recommendation models aggregate all incoming information from neighbors without distinguishing the noise nodes, which deteriorates the recommendation performance. Our model further prunes the edges of the user-item interaction graph following a degree-sensitive probability instead of the uniform distribution. Experimental results show that the proposed model outperforms the state-of-the-art models significantly on four public datasets with fast training convergence. The implementation code of the proposed method is available at https://github.com/enoche/ImRec.
2002.07444
Vladimir Podolskii
Alexander Kozachinskiy and Vladimir Podolskii
Multiparty Karchmer-Wigderson Games and Threshold Circuits
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We suggest a generalization of Karchmer-Wigderson communication games to the multiparty setting. Our generalization turns out to be tightly connected to circuits consisting of threshold gates. This allows us to obtain new explicit constructions of such circuits for several functions. In particular, we provide an explicit (polynomial-time computable) log-depth monotone formula for the Majority function, consisting only of 3-bit majority gates and variables. This resolves a conjecture of Cohen et al. (CRYPTO 2013).
[ { "created": "Tue, 18 Feb 2020 09:31:00 GMT", "version": "v1" } ]
2020-02-19
[ [ "Kozachinskiy", "Alexander", "" ], [ "Podolskii", "Vladimir", "" ] ]
We suggest a generalization of Karchmer-Wigderson communication games to the multiparty setting. Our generalization turns out to be tightly connected to circuits consisting of threshold gates. This allows us to obtain new explicit constructions of such circuits for several functions. In particular, we provide an explicit (polynomial-time computable) log-depth monotone formula for the Majority function, consisting only of 3-bit majority gates and variables. This resolves a conjecture of Cohen et al. (CRYPTO 2013).
2006.05850
Alessandro Epasto
Michele Borassi, Alessandro Epasto, Silvio Lattanzi, Sergei Vassilvitskii, Morteza Zadimoghaddam
Sliding Window Algorithms for k-Clustering Problems
43 pages, 7 figures
In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020)
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sliding window model of computation captures scenarios in which data is arriving continuously, but only the latest $w$ elements should be used for analysis. The goal is to design algorithms that update the solution efficiently with each arrival rather than recomputing it from scratch. In this work, we focus on $k$-clustering problems such as $k$-means and $k$-median. In this setting, we provide simple and practical algorithms that offer stronger performance guarantees than previous results. Empirically, we show that our methods store only a small fraction of the data, are orders of magnitude faster, and find solutions with costs only slightly higher than those returned by algorithms with access to the full dataset.
[ { "created": "Wed, 10 Jun 2020 14:26:57 GMT", "version": "v1" }, { "created": "Fri, 23 Oct 2020 14:20:27 GMT", "version": "v2" } ]
2020-10-26
[ [ "Borassi", "Michele", "" ], [ "Epasto", "Alessandro", "" ], [ "Lattanzi", "Silvio", "" ], [ "Vassilvitskii", "Sergei", "" ], [ "Zadimoghaddam", "Morteza", "" ] ]
The sliding window model of computation captures scenarios in which data is arriving continuously, but only the latest $w$ elements should be used for analysis. The goal is to design algorithms that update the solution efficiently with each arrival rather than recomputing it from scratch. In this work, we focus on $k$-clustering problems such as $k$-means and $k$-median. In this setting, we provide simple and practical algorithms that offer stronger performance guarantees than previous results. Empirically, we show that our methods store only a small fraction of the data, are orders of magnitude faster, and find solutions with costs only slightly higher than those returned by algorithms with access to the full dataset.
1511.05122
Sara Sabour
Sara Sabour, Yanshuai Cao, Fartash Faghri, David J. Fleet
Adversarial Manipulation of Deep Representations
Accepted as a conference paper at ICLR 2016
null
null
null
cs.CV cs.LG cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
We show that the representation of an image in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels, while we concentrate on the internal layers of DNN representations. In this way our new class of adversarial images differs qualitatively from others. While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to a different image, one from a different class, bearing little if any apparent similarity to the input; they appear generic and consistent with the space of natural images. This phenomenon raises questions about DNN representations, as well as the properties of natural images themselves.
[ { "created": "Mon, 16 Nov 2015 20:48:20 GMT", "version": "v1" }, { "created": "Thu, 19 Nov 2015 21:00:44 GMT", "version": "v2" }, { "created": "Mon, 23 Nov 2015 20:56:44 GMT", "version": "v3" }, { "created": "Fri, 11 Dec 2015 21:03:14 GMT", "version": "v4" }, { "created": "Thu, 7 Jan 2016 20:59:55 GMT", "version": "v5" }, { "created": "Tue, 12 Jan 2016 20:51:51 GMT", "version": "v6" }, { "created": "Wed, 13 Jan 2016 20:57:33 GMT", "version": "v7" }, { "created": "Tue, 1 Mar 2016 20:51:06 GMT", "version": "v8" }, { "created": "Fri, 4 Mar 2016 20:21:24 GMT", "version": "v9" } ]
2016-03-07
[ [ "Sabour", "Sara", "" ], [ "Cao", "Yanshuai", "" ], [ "Faghri", "Fartash", "" ], [ "Fleet", "David J.", "" ] ]
We show that the representation of an image in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels, while we concentrate on the internal layers of DNN representations. In this way our new class of adversarial images differs qualitatively from others. While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to a different image, one from a different class, bearing little if any apparent similarity to the input; they appear generic and consistent with the space of natural images. This phenomenon raises questions about DNN representations, as well as the properties of natural images themselves.
2103.00809
Renshuai Tao
Renshuai Tao, Yanlu Wei, Hainan Li, Aishan Liu, Yifu Ding, Haotong Qin and Xianglong Liu
Over-sampling De-occlusion Attention Network for Prohibited Items Detection in Noisy X-ray Images
13 pages, 7 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Security inspection involves X-ray scanning of personal belongings in suitcases, which is highly important for public security but highly time-consuming for human inspectors. Fortunately, deep learning has greatly promoted the development of computer vision, offering a possible way of automating security inspection. However, items within luggage are randomly overlapped, resulting in noisy X-ray images with heavy occlusions. Thus, traditional CNN-based models trained on common image recognition datasets fail to achieve satisfactory performance in this scenario. To address these problems, we contribute the first high-quality prohibited X-ray object detection dataset, named OPIXray, which contains 8885 X-ray images from 5 categories of the widely occurring prohibited item ``cutters''. The images were gathered from an airport, and the prohibited items were annotated manually by professional inspectors; the dataset can be used as a benchmark for model training and to facilitate future research. To better improve occluded X-ray object detection, we further propose an over-sampling de-occlusion attention network (DOAM-O), which consists of a novel de-occlusion attention module and a new over-sampling training strategy. Specifically, our de-occlusion module, namely DOAM, simultaneously leverages the different appearance information of the prohibited items, while the over-sampling training strategy forces the model to put more emphasis on hard samples containing items with high occlusion levels, which is more suitable for this scenario. We comprehensively evaluated DOAM-O on the OPIXray dataset, which shows that our model can stably improve the performance of popular detection models such as SSD, YOLOv3, and FCOS, and outperform many widely used attention mechanisms.
[ { "created": "Mon, 1 Mar 2021 07:17:37 GMT", "version": "v1" } ]
2021-03-02
[ [ "Tao", "Renshuai", "" ], [ "Wei", "Yanlu", "" ], [ "Li", "Hainan", "" ], [ "Liu", "Aishan", "" ], [ "Ding", "Yifu", "" ], [ "Qin", "Haotong", "" ], [ "Liu", "Xianglong", "" ] ]
Security inspection involves X-ray scanning of personal belongings in suitcases, which is highly important for public security but highly time-consuming for human inspectors. Fortunately, deep learning has greatly promoted the development of computer vision, offering a possible way of automating security inspection. However, items within luggage are randomly overlapped, resulting in noisy X-ray images with heavy occlusions. Thus, traditional CNN-based models trained on common image recognition datasets fail to achieve satisfactory performance in this scenario. To address these problems, we contribute the first high-quality prohibited X-ray object detection dataset, named OPIXray, which contains 8885 X-ray images from 5 categories of the widely occurring prohibited item ``cutters''. The images were gathered from an airport, and the prohibited items were annotated manually by professional inspectors; the dataset can be used as a benchmark for model training and to facilitate future research. To better improve occluded X-ray object detection, we further propose an over-sampling de-occlusion attention network (DOAM-O), which consists of a novel de-occlusion attention module and a new over-sampling training strategy. Specifically, our de-occlusion module, namely DOAM, simultaneously leverages the different appearance information of the prohibited items, while the over-sampling training strategy forces the model to put more emphasis on hard samples containing items with high occlusion levels, which is more suitable for this scenario. We comprehensively evaluated DOAM-O on the OPIXray dataset, which shows that our model can stably improve the performance of popular detection models such as SSD, YOLOv3, and FCOS, and outperform many widely used attention mechanisms.
1004.4458
William Jackson
P.V.Hunagund and A.B.Kalpana
Crosstalk Noise Modeling for RC and RLC interconnects in Deep Submicron VLSI Circuits
Journal of Computing online at https://sites.google.com/site/journalofcomputing/
Journal of Computing, Volume 2, Issue 4, April 2010
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A crosstalk noise model for noise-constrained interconnect optimization is presented for RC interconnects. The proposed model has simple closed-form expressions and is capable of predicting the noise amplitude and noise pulse width of an RC interconnect, as well as the coupling locations (near-driver and near-receiver) on the victim net. This paper also presents a crosstalk noise model for both identical and non-identical coupled resistance-inductance-capacitance (RLC) interconnects, developed based on a decoupling technique and exhibiting an average error of 6.8% compared to SPICE. The crosstalk noise model, together with a proposed concept of effective mutual inductance, is applied to evaluate the effectiveness of the shielding technique.
[ { "created": "Mon, 26 Apr 2010 10:00:05 GMT", "version": "v1" } ]
2010-04-27
[ [ "Hunagund", "P. V.", "" ], [ "Kalpana", "A. B.", "" ] ]
A crosstalk noise model for noise-constrained interconnect optimization is presented for RC interconnects. The proposed model has simple closed-form expressions and is capable of predicting the noise amplitude and the noise pulse width of an RC interconnect, as well as coupling locations (near-driver and near-receiver) on the victim net. This paper also presents a crosstalk noise model for both identical and non-identical coupled resistance-inductance-capacitance (RLC) interconnects, developed on the basis of a decoupling technique and exhibiting an average error of 6.8% compared to SPICE. The crosstalk noise model, together with a proposed concept of effective mutual inductance, is applied to evaluate the effectiveness of the shielding technique.
2012.02334
Yaofeng Desmond Zhong
Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty
Benchmarking Energy-Conserving Neural Networks for Learning Dynamics from Data
null
null
null
null
cs.LG cs.AI cs.SY eess.SY math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The last few years have witnessed an increased interest in incorporating physics-informed inductive bias in deep learning frameworks. In particular, a growing volume of literature has been exploring ways to enforce energy conservation while using neural networks for learning dynamics from observed time-series data. In this work, we survey ten recently proposed energy-conserving neural network models, including HNN, LNN, DeLaN, SymODEN, CHNN, CLNN and their variants. We provide a compact derivation of the theory behind these models and explain their similarities and differences. Their performance are compared in 4 physical systems. We point out the possibility of leveraging some of these energy-conserving models to design energy-based controllers.
[ { "created": "Thu, 3 Dec 2020 23:53:08 GMT", "version": "v1" }, { "created": "Wed, 30 Dec 2020 18:34:04 GMT", "version": "v2" }, { "created": "Fri, 26 Feb 2021 18:13:21 GMT", "version": "v3" }, { "created": "Tue, 18 May 2021 19:24:55 GMT", "version": "v4" }, { "created": "Tue, 11 Jan 2022 20:14:55 GMT", "version": "v5" }, { "created": "Fri, 28 Apr 2023 21:26:45 GMT", "version": "v6" } ]
2023-05-02
[ [ "Zhong", "Yaofeng Desmond", "" ], [ "Dey", "Biswadip", "" ], [ "Chakraborty", "Amit", "" ] ]
The last few years have witnessed increased interest in incorporating physics-informed inductive biases into deep learning frameworks. In particular, a growing volume of literature has explored ways to enforce energy conservation while using neural networks for learning dynamics from observed time-series data. In this work, we survey ten recently proposed energy-conserving neural network models, including HNN, LNN, DeLaN, SymODEN, CHNN, CLNN and their variants. We provide a compact derivation of the theory behind these models and explain their similarities and differences. Their performance is compared on 4 physical systems. We point out the possibility of leveraging some of these energy-conserving models to design energy-based controllers.
2405.15860
Chak Fong Chong
Chak Fong Chong, Jielong Guo, Xu Yang, Wei Ke, Yapeng Wang
Free Performance Gain from Mixing Multiple Partially Labeled Samples in Multi-label Image Classification
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-label image classification datasets are often partially labeled where many labels are missing, posing a significant challenge to training accurate deep classifiers. However, the powerful Mixup sample-mixing data augmentation cannot be well utilized to address this challenge, as it cannot perform linear interpolation on the unknown labels to construct augmented samples. In this paper, we propose LogicMix, a Mixup variant designed for such partially labeled datasets. LogicMix mixes the sample labels by logical OR so that the unknown labels can be correctly mixed by utilizing OR's logical equivalences, including the domination and identity laws. Unlike Mixup, which mixes exactly two samples, LogicMix can mix multiple ($\geq2$) partially labeled samples, constructing visually more confused augmented samples to regularize training. LogicMix is more general and effective than other compared Mixup variants in the experiments on various partially labeled dataset scenarios. Moreover, it is plug-and-play and only requires minimal computation, hence it can be easily inserted into existing frameworks to collaborate with other methods to improve model performance with a negligible impact on training time, as demonstrated through extensive experiments. In particular, through the collaboration of LogicMix, RandAugment, Curriculum Labeling, and Category-wise Fine-Tuning, we attain state-of-the-art performance on MS-COCO, VG-200, and Pascal VOC 2007 benchmarking datasets. The remarkable generality, effectiveness, collaboration, and simplicity suggest that LogicMix promises to be a popular and vital data augmentation method.
[ { "created": "Fri, 24 May 2024 18:05:09 GMT", "version": "v1" } ]
2024-05-28
[ [ "Chong", "Chak Fong", "" ], [ "Guo", "Jielong", "" ], [ "Yang", "Xu", "" ], [ "Ke", "Wei", "" ], [ "Wang", "Yapeng", "" ] ]
Multi-label image classification datasets are often partially labeled, with many labels missing, posing a significant challenge to training accurate deep classifiers. However, the powerful Mixup sample-mixing data augmentation cannot be well utilized to address this challenge, as it cannot perform linear interpolation on the unknown labels to construct augmented samples. In this paper, we propose LogicMix, a Mixup variant designed for such partially labeled datasets. LogicMix mixes the sample labels by logical OR so that the unknown labels can be correctly mixed by utilizing OR's logical equivalences, including the domination and identity laws. Unlike Mixup, which mixes exactly two samples, LogicMix can mix multiple ($\geq2$) partially labeled samples, constructing visually more confused augmented samples to regularize training. LogicMix is more general and effective than the other Mixup variants compared in experiments on various partially labeled dataset scenarios. Moreover, it is plug-and-play and requires only minimal computation, so it can be easily inserted into existing frameworks to collaborate with other methods and improve model performance with a negligible impact on training time, as demonstrated through extensive experiments. In particular, through the collaboration of LogicMix, RandAugment, Curriculum Labeling, and Category-wise Fine-Tuning, we attain state-of-the-art performance on the MS-COCO, VG-200, and Pascal VOC 2007 benchmark datasets. The remarkable generality, effectiveness, collaboration, and simplicity suggest that LogicMix promises to be a popular and vital data augmentation method.
2104.13048
Zelin Zang
Zelin Zang, Siyuan Li, Di Wu, Jianzhu Guo, Yongjie Xu, Stan Z. Li
Unsupervised Deep Manifold Attributed Graph Embedding
arXiv admin note: text overlap with arXiv:2007.01594 by other authors
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Unsupervised attributed graph representation learning is challenging since both structural and feature information are required to be represented in the latent space. Existing methods concentrate on learning latent representation via reconstruction tasks, but cannot directly optimize representation and are prone to oversmoothing, thus limiting the applications on downstream tasks. To alleviate these issues, we propose a novel graph embedding framework named Deep Manifold Attributed Graph Embedding (DMAGE). A node-to-node geodesic similarity is proposed to compute the inter-node similarity between the data space and the latent space and then use Bergman divergence as loss function to minimize the difference between them. We then design a new network structure with fewer aggregation to alleviate the oversmoothing problem and incorporate graph structure augmentation to improve the representation's stability. Our proposed DMAGE surpasses state-of-the-art methods by a significant margin on three downstream tasks: unsupervised visualization, node clustering, and link prediction across four popular datasets.
[ { "created": "Tue, 27 Apr 2021 08:47:39 GMT", "version": "v1" } ]
2021-04-28
[ [ "Zang", "Zelin", "" ], [ "Li", "Siyuan", "" ], [ "Wu", "Di", "" ], [ "Guo", "Jianzhu", "" ], [ "Xu", "Yongjie", "" ], [ "Li", "Stan Z.", "" ] ]
Unsupervised attributed graph representation learning is challenging since both structural and feature information are required to be represented in the latent space. Existing methods concentrate on learning latent representations via reconstruction tasks, but cannot directly optimize the representation and are prone to oversmoothing, thus limiting their applicability to downstream tasks. To alleviate these issues, we propose a novel graph embedding framework named Deep Manifold Attributed Graph Embedding (DMAGE). A node-to-node geodesic similarity is proposed to compute the inter-node similarity between the data space and the latent space, and Bregman divergence is then used as the loss function to minimize the difference between them. We then design a new network structure with fewer aggregation operations to alleviate the oversmoothing problem and incorporate graph structure augmentation to improve the representation's stability. Our proposed DMAGE surpasses state-of-the-art methods by a significant margin on three downstream tasks: unsupervised visualization, node clustering, and link prediction across four popular datasets.
2406.00842
Ori Ernst
Ori Ernst, Ori Shapira, Aviv Slobodkin, Sharon Adar, Mohit Bansal, Jacob Goldberger, Ran Levy, and Ido Dagan
The Power of Summary-Source Alignments
Accepted to ACL-Findings 2024
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-document summarization (MDS) is a challenging task, often decomposed to subtasks of salience and redundancy detection, followed by text generation. In this context, alignment of corresponding sentences between a reference summary and its source documents has been leveraged to generate training data for some of the component tasks. Yet, this enabling alignment step has usually been applied heuristically on the sentence level on a limited number of subtasks. In this paper, we propose extending the summary-source alignment framework by (1) applying it at the more fine-grained proposition span level, (2) annotating alignment manually in a multi-document setup, and (3) revealing the great potential of summary-source alignments to yield several datasets for at least six different tasks. Specifically, for each of the tasks, we release a manually annotated test set that was derived automatically from the alignment annotation. We also release development and train sets in the same way, but from automatically derived alignments. Using the datasets, each task is demonstrated with baseline models and corresponding evaluation metrics to spur future research on this broad challenge.
[ { "created": "Sun, 2 Jun 2024 19:35:19 GMT", "version": "v1" } ]
2024-06-04
[ [ "Ernst", "Ori", "" ], [ "Shapira", "Ori", "" ], [ "Slobodkin", "Aviv", "" ], [ "Adar", "Sharon", "" ], [ "Bansal", "Mohit", "" ], [ "Goldberger", "Jacob", "" ], [ "Levy", "Ran", "" ], [ "Dagan", "Ido", "" ] ]
Multi-document summarization (MDS) is a challenging task, often decomposed into subtasks of salience and redundancy detection, followed by text generation. In this context, alignment of corresponding sentences between a reference summary and its source documents has been leveraged to generate training data for some of the component tasks. Yet, this enabling alignment step has usually been applied heuristically at the sentence level and to a limited number of subtasks. In this paper, we propose extending the summary-source alignment framework by (1) applying it at the more fine-grained proposition span level, (2) annotating alignment manually in a multi-document setup, and (3) revealing the great potential of summary-source alignments to yield several datasets for at least six different tasks. Specifically, for each of the tasks, we release a manually annotated test set that was derived automatically from the alignment annotation. We also release development and train sets in the same way, but from automatically derived alignments. Using the datasets, each task is demonstrated with baseline models and corresponding evaluation metrics to spur future research on this broad challenge.
1311.5871
Fabien Lauer
Fabien Lauer (LORIA), Henrik Ohlsson
Finding sparse solutions of systems of polynomial equations via group-sparsity optimization
Journal of Global Optimization (2014) to appear
null
null
null
cs.IT cs.LG math.IT math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper deals with the problem of finding sparse solutions to systems of polynomial equations possibly perturbed by noise. In particular, we show how these solutions can be recovered from group-sparse solutions of a derived system of linear equations. Then, two approaches are considered to find these group-sparse solutions. The first one is based on a convex relaxation resulting in a second-order cone programming formulation which can benefit from efficient reweighting techniques for sparsity enhancement. For this approach, sufficient conditions for the exact recovery of the sparsest solution to the polynomial system are derived in the noiseless setting, while stable recovery results are obtained for the noisy case. Though lacking a similar analysis, the second approach provides a more computationally efficient algorithm based on a greedy strategy adding the groups one-by-one. With respect to previous work, the proposed methods recover the sparsest solution in a very short computing time while remaining at least as accurate in terms of the probability of success. This probability is empirically analyzed to emphasize the relationship between the ability of the methods to solve the polynomial system and the sparsity of the solution.
[ { "created": "Fri, 22 Nov 2013 20:29:38 GMT", "version": "v1" }, { "created": "Wed, 16 Jul 2014 15:47:44 GMT", "version": "v2" } ]
2014-07-17
[ [ "Lauer", "Fabien", "", "LORIA" ], [ "Ohlsson", "Henrik", "" ] ]
The paper deals with the problem of finding sparse solutions to systems of polynomial equations possibly perturbed by noise. In particular, we show how these solutions can be recovered from group-sparse solutions of a derived system of linear equations. Then, two approaches are considered to find these group-sparse solutions. The first one is based on a convex relaxation resulting in a second-order cone programming formulation which can benefit from efficient reweighting techniques for sparsity enhancement. For this approach, sufficient conditions for the exact recovery of the sparsest solution to the polynomial system are derived in the noiseless setting, while stable recovery results are obtained for the noisy case. Though lacking a similar analysis, the second approach provides a more computationally efficient algorithm based on a greedy strategy adding the groups one-by-one. With respect to previous work, the proposed methods recover the sparsest solution in a very short computing time while remaining at least as accurate in terms of the probability of success. This probability is empirically analyzed to emphasize the relationship between the ability of the methods to solve the polynomial system and the sparsity of the solution.
0709.3586
Fabrice Rossi
A\"icha El Golli (INRIA Rocquencourt / INRIA Sophia Antipolis), Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Brieuc Conan-Guez (LITA), Yves Lechevallier (INRIA Rocquencourt / INRIA Sophia Antipolis)
Une adaptation des cartes auto-organisatrices pour des donn\'ees d\'ecrites par un tableau de dissimilarit\'es
null
Revue de Statistique Appliqu\'ee LIV, 3 (2006) 33-64
null
null
cs.NE cs.LG
null
Many data analysis methods cannot be applied to data that are not represented by a fixed number of real values, whereas most of real world observations are not readily available in such a format. Vector based data analysis methods have therefore to be adapted in order to be used with non standard complex data. A flexible and general solution for this adaptation is to use a (dis)similarity measure. Indeed, thanks to expert knowledge on the studied data, it is generally possible to define a measure that can be used to make pairwise comparison between observations. General data analysis methods are then obtained by adapting existing methods to (dis)similarity matrices. In this article, we propose an adaptation of Kohonen's Self Organizing Map (SOM) to (dis)similarity data. The proposed algorithm is an adapted version of the vector based batch SOM. The method is validated on real world data: we provide an analysis of the usage patterns of the web site of the Institut National de Recherche en Informatique et Automatique, constructed thanks to web log mining method.
[ { "created": "Sat, 22 Sep 2007 15:53:54 GMT", "version": "v1" } ]
2007-09-25
[ [ "Golli", "Aïcha El", "", "INRIA Rocquencourt / INRIA Sophia Antipolis" ], [ "Rossi", "Fabrice", "", "INRIA Rocquencourt / INRIA Sophia Antipolis" ], [ "Conan-Guez", "Brieuc", "", "LITA" ], [ "Lechevallier", "Yves", "", "INRIA Rocquencourt / INRIA Sophia\n Antipolis" ] ]
Many data analysis methods cannot be applied to data that are not represented by a fixed number of real values, whereas most real-world observations are not readily available in such a format. Vector-based data analysis methods therefore have to be adapted in order to be used with non-standard complex data. A flexible and general solution for this adaptation is to use a (dis)similarity measure. Indeed, thanks to expert knowledge on the studied data, it is generally possible to define a measure that can be used to make pairwise comparisons between observations. General data analysis methods are then obtained by adapting existing methods to (dis)similarity matrices. In this article, we propose an adaptation of Kohonen's Self Organizing Map (SOM) to (dis)similarity data. The proposed algorithm is an adapted version of the vector-based batch SOM. The method is validated on real-world data: we provide an analysis of the usage patterns of the web site of the Institut National de Recherche en Informatique et Automatique, constructed using a web log mining method.
2304.04094
Omar Maraqa
Omar Maraqa, Saad Al-Ahmadi, Aditya Rajasekaran, Hamza Sokun, Halim Yanikomeroglu, Sadiq M. Sait
Energy-Efficient Optimization of Multi-User NOMA-Assisted Cooperative THz-SIMO MEC Systems
Accepted for publication in IEEE Transactions on Communications
null
10.1109/TCOMM.2023.3265123
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The various requirements in terms of data rates and latency in beyond 5G and 6G networks have motivated the integration of a variety of communications schemes and technologies to meet these requirements in such networks. Among these schemes are Terahertz (THz) communications, cooperative non-orthogonal multiple-access (NOMA)-enabled schemes, and mobile edge computing (MEC). THz communications offer abundant bandwidth for high-data-rate short-distance applications and NOMA-enabled schemes are promising schemes to realize the target spectral efficiencies and low latency requirements in future networks, while MEC would allow distributed processing and data offloading for the emerging applications in these networks. In this paper, an energy-efficient scheme of multi-user NOMA-assisted cooperative THz single-input multiple-output (SIMO) MEC systems is proposed to allow the uplink transmission of offloaded data from the far cell-edge users to the more computing resources in the base station (BS) through the cell-center users. To reinforce the performance of the proposed scheme, two optimization problems are formulated and solved, namely, the first problem minimizes the total users' energy consumption while the second problem maximizes the total users' computation energy efficiency (CEE) for the proposed scheme. In both problems, the NOMA user pairing, the BS receive beamforming, the transmission time allocation, and the NOMA transmission power allocation coefficients are optimized, while taking into account the full-offloading requirements of each user as well as the predefined latency constraint of the system. The obtained results reveal new insights into the performance and design of multi-user NOMA-assisted cooperative THz-SIMO MEC systems.
[ { "created": "Sat, 8 Apr 2023 20:04:39 GMT", "version": "v1" } ]
2023-04-11
[ [ "Maraqa", "Omar", "" ], [ "Al-Ahmadi", "Saad", "" ], [ "Rajasekaran", "Aditya", "" ], [ "Sokun", "Hamza", "" ], [ "Yanikomeroglu", "Halim", "" ], [ "Sait", "Sadiq M.", "" ] ]
The various requirements in terms of data rates and latency in beyond-5G and 6G networks have motivated the integration of a variety of communication schemes and technologies to meet these requirements in such networks. Among these schemes are Terahertz (THz) communications, cooperative non-orthogonal multiple-access (NOMA)-enabled schemes, and mobile edge computing (MEC). THz communications offer abundant bandwidth for high-data-rate short-distance applications, NOMA-enabled schemes are promising for realizing the target spectral efficiencies and low latency requirements of future networks, and MEC allows distributed processing and data offloading for the emerging applications in these networks. In this paper, an energy-efficient scheme of multi-user NOMA-assisted cooperative THz single-input multiple-output (SIMO) MEC systems is proposed to allow the uplink transmission of offloaded data from far cell-edge users to the greater computing resources of the base station (BS) through the cell-center users. To reinforce the performance of the proposed scheme, two optimization problems are formulated and solved: the first minimizes the total users' energy consumption, while the second maximizes the total users' computation energy efficiency (CEE) for the proposed scheme. In both problems, the NOMA user pairing, the BS receive beamforming, the transmission time allocation, and the NOMA transmission power allocation coefficients are optimized, while taking into account the full-offloading requirements of each user as well as the predefined latency constraint of the system. The obtained results reveal new insights into the performance and design of multi-user NOMA-assisted cooperative THz-SIMO MEC systems.
2211.12923
Kevin Batz
Kevin Batz, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Christoph Matheja, Lena Verscht
A Calculus for Amortized Expected Runtimes
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a weakest-precondition-style calculus \`a la Dijkstra for reasoning about amortized expected runtimes of randomized algorithms with access to dynamic memory - the $\textsf{aert}$ calculus. Our calculus is truly quantitative, i.e. instead of Boolean valued predicates, it manipulates real-valued functions. En route to the $\textsf{aert}$ calculus, we study the $\textsf{ert}$ calculus for reasoning about expected runtimes of Kaminski et al. [2018] extended by capabilities for handling dynamic memory, thus enabling compositional and local reasoning about randomized data structures. This extension employs runtime separation logic, which has been foreshadowed by Matheja [2020] and then implemented in Isabelle/HOL by Haslbeck [2021]. In addition to Haslbeck's results, we further prove soundness of the so-extended $\textsf{ert}$ calculus with respect to an operational Markov decision process model featuring countably-branching nondeterminism, provide intuitive explanations, and provide proof rules enabling separation logic-style verification for upper bounds on expected runtimes. Finally, we build the so-called potential method for amortized analysis into the $\textsf{ert}$ calculus, thus obtaining the $\textsf{aert}$ calculus. Since one needs to be able to handle changes in potential which can be negative, the $\textsf{aert}$ calculus needs to be capable of handling signed random variables. A particularly pleasing feature of our solution is that, unlike e.g. Kozen [1985], we obtain a loop rule for our signed random variables, and furthermore, unlike e.g. Kaminski and Katoen [2017], the $\textsf{aert}$ calculus makes do without the need for involved technical machinery keeping track of the integrability of the random variables. Finally, we present case studies, including a formal analysis of a randomized delete-insert-find-any set data structure [Brodal et al. 1996].
[ { "created": "Wed, 23 Nov 2022 12:55:14 GMT", "version": "v1" } ]
2022-11-24
[ [ "Batz", "Kevin", "" ], [ "Kaminski", "Benjamin Lucien", "" ], [ "Katoen", "Joost-Pieter", "" ], [ "Matheja", "Christoph", "" ], [ "Verscht", "Lena", "" ] ]
We develop a weakest-precondition-style calculus \`a la Dijkstra for reasoning about amortized expected runtimes of randomized algorithms with access to dynamic memory - the $\textsf{aert}$ calculus. Our calculus is truly quantitative, i.e. instead of Boolean valued predicates, it manipulates real-valued functions. En route to the $\textsf{aert}$ calculus, we study the $\textsf{ert}$ calculus for reasoning about expected runtimes of Kaminski et al. [2018] extended by capabilities for handling dynamic memory, thus enabling compositional and local reasoning about randomized data structures. This extension employs runtime separation logic, which has been foreshadowed by Matheja [2020] and then implemented in Isabelle/HOL by Haslbeck [2021]. In addition to Haslbeck's results, we further prove soundness of the so-extended $\textsf{ert}$ calculus with respect to an operational Markov decision process model featuring countably-branching nondeterminism, provide intuitive explanations, and provide proof rules enabling separation logic-style verification for upper bounds on expected runtimes. Finally, we build the so-called potential method for amortized analysis into the $\textsf{ert}$ calculus, thus obtaining the $\textsf{aert}$ calculus. Since one needs to be able to handle changes in potential which can be negative, the $\textsf{aert}$ calculus needs to be capable of handling signed random variables. A particularly pleasing feature of our solution is that, unlike e.g. Kozen [1985], we obtain a loop rule for our signed random variables, and furthermore, unlike e.g. Kaminski and Katoen [2017], the $\textsf{aert}$ calculus makes do without the need for involved technical machinery keeping track of the integrability of the random variables. Finally, we present case studies, including a formal analysis of a randomized delete-insert-find-any set data structure [Brodal et al. 1996].
2109.06896
Chao-Chun Hsu
Chao-Chun Hsu and Chenhao Tan
Decision-Focused Summarization
16 pages, 10 figures, EMNLP 2021, code is available at https://github.com/ChicagoHAI/decsum
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Relevance in summarization is typically defined based on textual information alone, without incorporating insights about a particular decision. As a result, to support risk analysis of pancreatic cancer, summaries of medical notes may include irrelevant information such as a knee injury. We propose a novel problem, decision-focused summarization, where the goal is to summarize relevant information for a decision. We leverage a predictive model that makes the decision based on the full text to provide valuable insights on how a decision can be inferred from text. To build a summary, we then select representative sentences that lead to similar model decisions as using the full text while accounting for textual non-redundancy. To evaluate our method (DecSum), we build a testbed where the task is to summarize the first ten reviews of a restaurant in support of predicting its future rating on Yelp. DecSum substantially outperforms text-only summarization methods and model-based explanation methods in decision faithfulness and representativeness. We further demonstrate that DecSum is the only method that enables humans to outperform random chance in predicting which restaurant will be better rated in the future.
[ { "created": "Tue, 14 Sep 2021 18:00:14 GMT", "version": "v1" } ]
2021-09-16
[ [ "Hsu", "Chao-Chun", "" ], [ "Tan", "Chenhao", "" ] ]
Relevance in summarization is typically defined based on textual information alone, without incorporating insights about a particular decision. As a result, to support risk analysis of pancreatic cancer, summaries of medical notes may include irrelevant information such as a knee injury. We propose a novel problem, decision-focused summarization, where the goal is to summarize relevant information for a decision. We leverage a predictive model that makes the decision based on the full text to provide valuable insights on how a decision can be inferred from text. To build a summary, we then select representative sentences that lead to similar model decisions as using the full text while accounting for textual non-redundancy. To evaluate our method (DecSum), we build a testbed where the task is to summarize the first ten reviews of a restaurant in support of predicting its future rating on Yelp. DecSum substantially outperforms text-only summarization methods and model-based explanation methods in decision faithfulness and representativeness. We further demonstrate that DecSum is the only method that enables humans to outperform random chance in predicting which restaurant will be better rated in the future.
2405.11437
Fadila Douamba Wendigoundi
Fadila Wendigoundi Douamba, Jianjun Song, Ling Fu, Yuliang Liu and Xiang Bai
The First Swahili Language Scene Text Detection and Recognition Dataset
Accepted to ICDAR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Scene text recognition is essential in many applications, including automated translation, information retrieval, driving assistance, and enhancing accessibility for individuals with visual impairments. Much research has been done to improve the accuracy and performance of scene text detection and recognition models. However, most of this research has been conducted in the most common languages, English and Chinese. There is a significant gap in low-resource languages, especially the Swahili Language. Swahili is widely spoken in East African countries but is still an under-explored language in scene text recognition. No studies have been focused explicitly on Swahili natural scene text detection and recognition, and no dataset for Swahili language scene text detection and recognition is publicly available. We propose a comprehensive dataset of Swahili scene text images and evaluate the dataset on different scene text detection and recognition models. The dataset contains 976 images collected in different places and under various circumstances. Each image has its annotation at the word level. The proposed dataset can also serve as a benchmark dataset specific to the Swahili language for evaluating and comparing different approaches and fostering future research endeavors. The dataset is available on GitHub via this link: https://github.com/FadilaW/Swahili-STR-Dataset
[ { "created": "Sun, 19 May 2024 03:55:02 GMT", "version": "v1" } ]
2024-05-21
[ [ "Douamba", "Fadila Wendigoundi", "" ], [ "Song", "Jianjun", "" ], [ "Fu", "Ling", "" ], [ "Liu", "Yuliang", "" ], [ "Bai", "Xiang", "" ] ]
Scene text recognition is essential in many applications, including automated translation, information retrieval, driving assistance, and enhancing accessibility for individuals with visual impairments. Much research has been done to improve the accuracy and performance of scene text detection and recognition models. However, most of this research has been conducted on the most common languages, English and Chinese. There is a significant gap for low-resource languages, especially Swahili. Swahili is widely spoken in East African countries but remains under-explored in scene text recognition. No studies have focused explicitly on Swahili natural scene text detection and recognition, and no dataset for Swahili-language scene text detection and recognition is publicly available. We propose a comprehensive dataset of Swahili scene text images and evaluate it on different scene text detection and recognition models. The dataset contains 976 images collected in different places and under various circumstances. Each image is annotated at the word level. The proposed dataset can also serve as a benchmark specific to the Swahili language for evaluating and comparing different approaches and fostering future research endeavors. The dataset is available on GitHub via this link: https://github.com/FadilaW/Swahili-STR-Dataset
2302.05703
Piush Aggarwal
Piush Aggarwal, Pranit Chawla, Mithun Das, Punyajoy Saha, Binny Mathew, Torsten Zesch, Animesh Mukherjee
HateProof: Are Hateful Meme Detection Systems really Robust?
Accepted at TheWebConf'2023 (WWW'2023)
null
10.1145/3543507.3583356
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploiting social media to spread hate has tremendously increased over the years. Lately, multi-modal hateful content such as memes has drawn relatively more traction than uni-modal content. Moreover, the availability of implicit content payloads makes them fairly challenging to be detected by existing hateful meme detection systems. In this paper, we present a use case study to analyze such systems' vulnerabilities against external adversarial attacks. We find that even very simple perturbations in uni-modal and multi-modal settings performed by humans with little knowledge about the model can make the existing detection models highly vulnerable. Empirically, we find a noticeable performance drop of as high as 10% in the macro-F1 score for certain attacks. As a remedy, we attempt to boost the model's robustness using contrastive learning as well as an adversarial training-based method - VILLA. Using an ensemble of the above two approaches, in two of our high resolution datasets, we are able to (re)gain back the performance to a large extent for certain attacks. We believe that ours is a first step toward addressing this crucial problem in an adversarial setting and would inspire more such investigations in the future.
[ { "created": "Sat, 11 Feb 2023 14:36:11 GMT", "version": "v1" } ]
2023-02-14
[ [ "Aggarwal", "Piush", "" ], [ "Chawla", "Pranit", "" ], [ "Das", "Mithun", "" ], [ "Saha", "Punyajoy", "" ], [ "Mathew", "Binny", "" ], [ "Zesch", "Torsten", "" ], [ "Mukherjee", "Animesh", "" ] ]
Exploiting social media to spread hate has increased tremendously over the years. Lately, multi-modal hateful content such as memes has drawn relatively more traction than uni-modal content. Moreover, the availability of implicit content payloads makes such memes fairly challenging for existing hateful meme detection systems to detect. In this paper, we present a use case study to analyze such systems' vulnerabilities against external adversarial attacks. We find that even very simple perturbations in uni-modal and multi-modal settings, performed by humans with little knowledge about the model, can make the existing detection models highly vulnerable. Empirically, we find a noticeable performance drop of as high as 10% in the macro-F1 score for certain attacks. As a remedy, we attempt to boost the model's robustness using contrastive learning as well as an adversarial-training-based method, VILLA. Using an ensemble of the above two approaches, on two of our high-resolution datasets, we are able to regain performance to a large extent for certain attacks. We believe that ours is a first step toward addressing this crucial problem in an adversarial setting and will inspire more such investigations in the future.
2005.01923
Muhammad Ali Farooq
Muhammad Ali Farooq and Peter Corcoran
Generating Thermal Image Data Samples using 3D Facial Modelling Techniques and Deep Learning Methodologies
Paper accepted at the IEEE QoMEX 2020 conference; copyright submitted to IEEE
null
10.1109/QoMEX48832.2020.9123079
null
cs.CV cs.LG eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Methods for generating synthetic data have become of increasing importance to build large datasets required for Convolution Neural Networks (CNN) based deep learning techniques for a wide range of computer vision applications. In this work, we extend existing methodologies to show how 2D thermal facial data can be mapped to provide 3D facial models. For the proposed research work we have used tufts datasets for generating 3D varying face poses by using a single frontal face pose. The system works by refining the existing image quality by performing fusion based image preprocessing operations. The refined outputs have better contrast adjustments, decreased noise level and higher exposedness of the dark regions. It makes the facial landmarks and temperature patterns on the human face more discernible and visible when compared to original raw data. Different image quality metrics are used to compare the refined version of images with original images. In the next phase of the proposed study, the refined version of images is used to create 3D facial geometry structures by using Convolution Neural Networks (CNN). The generated outputs are then imported in blender software to finally extract the 3D thermal facial outputs of both males and females. The same technique is also used on our thermal face data acquired using prototype thermal camera (developed under Heliaus EU project) in an indoor lab environment which is then used for generating synthetic 3D face data along with varying yaw face angles and lastly facial depth map is generated.
[ { "created": "Tue, 5 May 2020 02:55:14 GMT", "version": "v1" }, { "created": "Thu, 7 May 2020 11:02:04 GMT", "version": "v2" } ]
2020-09-02
[ [ "Farooq", "Muhammad Ali", "" ], [ "Corcoran", "Peter", "" ] ]
Methods for generating synthetic data have become of increasing importance for building the large datasets required by Convolutional Neural Network (CNN)-based deep learning techniques for a wide range of computer vision applications. In this work, we extend existing methodologies to show how 2D thermal facial data can be mapped to provide 3D facial models. For the proposed research work, we have used the Tufts dataset to generate 3D varying face poses from a single frontal face pose. The system works by refining the existing image quality through fusion-based image preprocessing operations. The refined outputs have better contrast adjustment, decreased noise level, and higher exposedness of the dark regions. This makes the facial landmarks and temperature patterns on the human face more discernible and visible compared to the original raw data. Different image quality metrics are used to compare the refined version of the images with the original images. In the next phase of the proposed study, the refined images are used to create 3D facial geometry structures using Convolutional Neural Networks (CNN). The generated outputs are then imported into Blender software to finally extract the 3D thermal facial outputs of both males and females. The same technique is also applied to our thermal face data acquired using a prototype thermal camera (developed under the Heliaus EU project) in an indoor lab environment, which is then used to generate synthetic 3D face data with varying yaw face angles, and lastly a facial depth map is generated.
2010.10852
Huy Quoc To
Huy Quoc To, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen, Anh Gia-Tuan Nguyen
Gender Prediction Based on Vietnamese Names with Machine Learning Techniques
6 pages, 6 figures. NLPIR 2020: 4th International Conference on Natural Language Processing and Information Retrieval
null
10.1145/3443279.3443309
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
As biological gender is one of the aspects of presenting individual human, much work has been done on gender classification based on people names. The proposals for English and Chinese languages are tremendous; still, there have been few works done for Vietnamese so far. We propose a new dataset for gender prediction based on Vietnamese names. This dataset comprises over 26,000 full names annotated with genders. This dataset is available on our website for research purposes. In addition, this paper describes six machine learning algorithms (Support Vector Machine, Multinomial Naive Bayes, Bernoulli Naive Bayes, Decision Tree, Random Forrest and Logistic Regression) and a deep learning model (LSTM) with fastText word embedding for gender prediction on Vietnamese names. We create a dataset and investigate the impact of each name component on detecting gender. As a result, the best F1-score that we have achieved is up to 96% on LSTM model and we generate a web API based on our trained model.
[ { "created": "Wed, 21 Oct 2020 09:25:48 GMT", "version": "v1" }, { "created": "Thu, 22 Oct 2020 02:21:32 GMT", "version": "v2" }, { "created": "Tue, 27 Oct 2020 01:29:35 GMT", "version": "v3" }, { "created": "Tue, 23 Mar 2021 07:25:00 GMT", "version": "v4" } ]
2021-03-24
[ [ "To", "Huy Quoc", "" ], [ "Van Nguyen", "Kiet", "" ], [ "Nguyen", "Ngan Luu-Thuy", "" ], [ "Nguyen", "Anh Gia-Tuan", "" ] ]
As biological gender is one aspect of presenting an individual human, much work has been done on gender classification based on people's names. The proposals for the English and Chinese languages are tremendous; still, few works have been done for Vietnamese so far. We propose a new dataset for gender prediction based on Vietnamese names. This dataset comprises over 26,000 full names annotated with genders and is available on our website for research purposes. In addition, this paper describes six machine learning algorithms (Support Vector Machine, Multinomial Naive Bayes, Bernoulli Naive Bayes, Decision Tree, Random Forest and Logistic Regression) and a deep learning model (LSTM) with fastText word embeddings for gender prediction on Vietnamese names. We create a dataset and investigate the impact of each name component on detecting gender. As a result, the best F1-score that we have achieved is up to 96% on the LSTM model, and we provide a web API based on our trained model.
2405.14664
Oscar Davis
Oscar Davis, Samuel Kessler, Mircea Petrache, \.Ismail \.Ilkan Ceylan, Michael Bronstein, Avishek Joey Bose
Fisher Flow Matching for Generative Modeling over Discrete Data
Preprint, Under Review
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative modeling over discrete data has recently seen numerous success stories, with applications spanning language modeling, biological sequence design, and graph-structured molecular data. The predominant generative modeling paradigm for discrete data is still autoregressive, with more recent alternatives based on diffusion or flow-matching falling short of their impressive performance in continuous data settings, such as image or video generation. In this work, we introduce Fisher-Flow, a novel flow-matching model for discrete data. Fisher-Flow takes a manifestly geometric perspective by considering categorical distributions over discrete data as points residing on a statistical manifold equipped with its natural Riemannian metric: the $\textit{Fisher-Rao metric}$. As a result, we demonstrate discrete data itself can be continuously reparameterised to points on the positive orthant of the $d$-hypersphere $\mathbb{S}^d_+$, which allows us to define flows that map any source distribution to target in a principled manner by transporting mass along (closed-form) geodesics of $\mathbb{S}^d_+$. Furthermore, the learned flows in Fisher-Flow can be further bootstrapped by leveraging Riemannian optimal transport leading to improved training dynamics. We prove that the gradient flow induced by Fisher-Flow is optimal in reducing the forward KL divergence. We evaluate Fisher-Flow on an array of synthetic and diverse real-world benchmarks, including designing DNA Promoter, and DNA Enhancer sequences. Empirically, we find that Fisher-Flow improves over prior diffusion and flow-matching models on these benchmarks.
[ { "created": "Thu, 23 May 2024 15:02:11 GMT", "version": "v1" }, { "created": "Fri, 24 May 2024 20:21:17 GMT", "version": "v2" }, { "created": "Tue, 28 May 2024 20:18:16 GMT", "version": "v3" } ]
2024-05-30
[ [ "Davis", "Oscar", "" ], [ "Kessler", "Samuel", "" ], [ "Petrache", "Mircea", "" ], [ "Ceylan", "İsmail İlkan", "" ], [ "Bronstein", "Michael", "" ], [ "Bose", "Avishek Joey", "" ] ]
Generative modeling over discrete data has recently seen numerous success stories, with applications spanning language modeling, biological sequence design, and graph-structured molecular data. The predominant generative modeling paradigm for discrete data is still autoregressive, with more recent alternatives based on diffusion or flow-matching falling short of their impressive performance in continuous data settings, such as image or video generation. In this work, we introduce Fisher-Flow, a novel flow-matching model for discrete data. Fisher-Flow takes a manifestly geometric perspective by considering categorical distributions over discrete data as points residing on a statistical manifold equipped with its natural Riemannian metric: the $\textit{Fisher-Rao metric}$. As a result, we demonstrate that discrete data itself can be continuously reparameterised to points on the positive orthant of the $d$-hypersphere $\mathbb{S}^d_+$, which allows us to define flows that map any source distribution to the target in a principled manner by transporting mass along (closed-form) geodesics of $\mathbb{S}^d_+$. Furthermore, the learned flows in Fisher-Flow can be further bootstrapped by leveraging Riemannian optimal transport, leading to improved training dynamics. We prove that the gradient flow induced by Fisher-Flow is optimal in reducing the forward KL divergence. We evaluate Fisher-Flow on an array of synthetic and diverse real-world benchmarks, including designing DNA promoter and DNA enhancer sequences. Empirically, we find that Fisher-Flow improves over prior diffusion and flow-matching models on these benchmarks.
cs/0405107
Carlos Ches\~nevar
Carlos Iv\'an Ches\~nevar and Guillermo Ricardo Simari
A Framework for Combining Defeasible Argumentation with Labeled Deduction
15 pages, presented at CMSRA Workshop 2003. Buenos Aires, Argentina
In "Computer Modeling of Scientific Reasoning" (C.Delrieux, J.Legris, Eds.). Pp. 43-56, Ed. Ediuns, Argentina, 2003. ISBN 987-89281-89-6
null
null
cs.AI cs.SC
null
In the last years, there has been an increasing demand of a variety of logical systems, prompted mostly by applications of logic in AI and other related areas. Labeled Deductive Systems (LDS) were developed as a flexible methodology to formalize such a kind of complex logical systems. Defeasible argumentation has proven to be a successful approach to formalizing commonsense reasoning, encompassing many other alternative formalisms for defeasible reasoning. Argument-based frameworks share some common notions (such as the concept of argument, defeater, etc.) along with a number of particular features which make it difficult to compare them with each other from a logical viewpoint. This paper introduces LDSar, a LDS for defeasible argumentation in which many important issues concerning defeasible argumentation are captured within a unified logical framework. We also discuss some logical properties and extensions that emerge from the proposed framework.
[ { "created": "Thu, 27 May 2004 18:54:31 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chesñevar", "Carlos Iván", "" ], [ "Simari", "Guillermo Ricardo", "" ] ]
In recent years, there has been an increasing demand for a variety of logical systems, prompted mostly by applications of logic in AI and other related areas. Labeled Deductive Systems (LDS) were developed as a flexible methodology to formalize such complex logical systems. Defeasible argumentation has proven to be a successful approach to formalizing commonsense reasoning, encompassing many other alternative formalisms for defeasible reasoning. Argument-based frameworks share some common notions (such as the concepts of argument, defeater, etc.) along with a number of particular features which make it difficult to compare them with each other from a logical viewpoint. This paper introduces LDSar, an LDS for defeasible argumentation in which many important issues concerning defeasible argumentation are captured within a unified logical framework. We also discuss some logical properties and extensions that emerge from the proposed framework.
2003.02645
Micha Livne
Micha Livne, Kevin Swersky, David J. Fleet
SentenceMIM: A Latent Variable Language Model
Preprint. Demo: https://github.com/seraphlabs-ca/SentenceMIM-demo
null
null
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SentenceMIM is a probabilistic auto-encoder for language data, trained with Mutual Information Machine (MIM) learning to provide a fixed length representation of variable length language observations (i.e., similar to VAE). Previous attempts to learn VAEs for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is robust against posterior collapse. As such, it learns informative representations whose dimension can be an order of magnitude higher than existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare sentenceMIM with VAE, and AE on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured latent space, comparable to VAEs. The structured latent representation is demonstrated with interpolation between sentences of different lengths. We demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering and transfer learning, without fine-tuning, outperforming VAE and AE with similar architectures.
[ { "created": "Tue, 18 Feb 2020 15:34:29 GMT", "version": "v1" }, { "created": "Fri, 6 Mar 2020 02:41:29 GMT", "version": "v2" }, { "created": "Mon, 8 Feb 2021 15:20:13 GMT", "version": "v3" }, { "created": "Sun, 14 Feb 2021 11:24:11 GMT", "version": "v4" }, { "created": "Wed, 21 Apr 2021 20:02:00 GMT", "version": "v5" } ]
2021-04-23
[ [ "Livne", "Micha", "" ], [ "Swersky", "Kevin", "" ], [ "Fleet", "David J.", "" ] ]
SentenceMIM is a probabilistic auto-encoder for language data, trained with Mutual Information Machine (MIM) learning to provide a fixed-length representation of variable-length language observations (i.e., similar to a VAE). Previous attempts to learn VAEs for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is robust against posterior collapse. As such, it learns informative representations whose dimension can be an order of magnitude higher than that of existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare SentenceMIM with VAEs and AEs on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured latent space, comparable to VAEs. The structured latent representation is demonstrated with interpolation between sentences of different lengths. We demonstrate the versatility of SentenceMIM by utilizing a trained model for question-answering and transfer learning, without fine-tuning, outperforming VAEs and AEs with similar architectures.
1608.05177
Youbao Tang
Youbao Tang, Xiangqian Wu, and Wei Bu
Deeply-Supervised Recurrent Convolutional Neural Network for Saliency Detection
5 pages, 5 figures, accepted by ACMMM 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a novel saliency detection method by developing a deeply-supervised recurrent convolutional neural network (DSRCNN), which performs a full image-to-image saliency prediction. For saliency detection, the local, global, and contextual information of salient objects is important to obtain a high quality salient map. To achieve this goal, the DSRCNN is designed based on VGGNet-16. Firstly, the recurrent connections are incorporated into each convolutional layer, which can make the model more powerful for learning the contextual information. Secondly, side-output layers are added to conduct the deeply-supervised operation, which can make the model learn more discriminative and robust features by effecting the intermediate layers. Finally, all of the side-outputs are fused to integrate the local and global information to get the final saliency detection results. Therefore, the DSRCNN combines the advantages of recurrent convolutional neural networks and deeply-supervised nets. The DSRCNN model is tested on five benchmark datasets, and experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art saliency detection approaches on all test datasets.
[ { "created": "Thu, 18 Aug 2016 05:08:16 GMT", "version": "v1" } ]
2016-08-19
[ [ "Tang", "Youbao", "" ], [ "Wu", "Xiangqian", "" ], [ "Bu", "Wei", "" ] ]
This paper proposes a novel saliency detection method by developing a deeply-supervised recurrent convolutional neural network (DSRCNN), which performs full image-to-image saliency prediction. For saliency detection, the local, global, and contextual information of salient objects is important for obtaining a high-quality saliency map. To achieve this goal, the DSRCNN is designed based on VGGNet-16. Firstly, recurrent connections are incorporated into each convolutional layer, which makes the model more powerful for learning contextual information. Secondly, side-output layers are added to conduct the deeply-supervised operation, which makes the model learn more discriminative and robust features by affecting the intermediate layers. Finally, all of the side-outputs are fused to integrate the local and global information and obtain the final saliency detection results. Therefore, the DSRCNN combines the advantages of recurrent convolutional neural networks and deeply-supervised nets. The DSRCNN model is tested on five benchmark datasets, and experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art saliency detection approaches on all test datasets.
1407.4903
Yutao Ma
Zhi Wang, Bing Li, Yutao Ma
An Analysis of Research in Software Engineering: Assessment and Trends
25 pages, 10 figures, 3 tables
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Glass published the first report on the assessment of systems and software engineering scholars and institutions two decades ago. The ongoing, annual survey of publications in this field provides fund managers, young scholars, graduate students, etc. with useful information for different purposes. However, the studies have been questioned by some critics because of a few shortcomings of the evaluation method. It is actually very hard to reach a widely recognized consensus on such an assessment of scholars and institutions. This paper presents a module and automated method for assessment and trends analysis in software engineering compared with the prior studies. To achieve a more reasonable evaluation result, we take into consideration more high-quality publications, the rank of each publication analyzed, and the different roles of authors named on each paper in question. According to the 7638 papers published in 36 publications from 2008 to 2013, the statistics of research subjects roughly follow power laws, implying the interesting Matthew Effect. We then identify the Top 20 scholars, institutions and countries or regions in terms of a new evaluation rule based on the frequently-used one. The top-ranked scholar is Mark Harman of the University College London, UK, the top-ranked institution is the University of California, USA, and the top-ranked country is the USA. Besides, we also show two levels of trend changes based on the EI classification system and user-defined uncontrolled keywords, as well as noteworthy scholars and institutions in a specific research area. We believe that our results would provide a valuable insight for young scholars and graduate students to seek possible potential collaborators and grasp the popular research topics in software engineering.
[ { "created": "Fri, 18 Jul 2014 07:50:52 GMT", "version": "v1" } ]
2014-07-21
[ [ "Wang", "Zhi", "" ], [ "Li", "Bing", "" ], [ "Ma", "Yutao", "" ] ]
Glass published the first report on the assessment of systems and software engineering scholars and institutions two decades ago. The ongoing, annual survey of publications in this field provides fund managers, young scholars, graduate students, etc. with useful information for different purposes. However, the studies have been questioned by some critics because of a few shortcomings of the evaluation method. It is actually very hard to reach a widely recognized consensus on such an assessment of scholars and institutions. This paper presents a modular and automated method for assessment and trend analysis in software engineering, compared with the prior studies. To achieve a more reasonable evaluation result, we take into consideration more high-quality publications, the rank of each publication analyzed, and the different roles of the authors named on each paper in question. According to the 7638 papers published in 36 publications from 2008 to 2013, the statistics of research subjects roughly follow power laws, implying the interesting Matthew Effect. We then identify the Top 20 scholars, institutions, and countries or regions in terms of a new evaluation rule based on the frequently-used one. The top-ranked scholar is Mark Harman of University College London, UK, the top-ranked institution is the University of California, USA, and the top-ranked country is the USA. Besides, we also show two levels of trend changes based on the EI classification system and user-defined uncontrolled keywords, as well as noteworthy scholars and institutions in a specific research area. We believe that our results provide valuable insights for young scholars and graduate students seeking potential collaborators and popular research topics in software engineering.
2006.03857
Yu Yang
Yu Yang, Zhiyuan Wen, Jiannong Cao, Jiaxing Shen, Hongzhi Yin and Xiaofang Zhou
EPARS: Early Prediction of At-risk Students with Online and Offline Learning Behaviors
To be published in DASFAA 2020
null
null
null
cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Early prediction of students at risk (STAR) is an effective and significant means to provide timely intervention for dropout and suicide. Existing works mostly rely on either online or offline learning behaviors which are not comprehensive enough to capture the whole learning processes and lead to unsatisfying prediction performance. We propose a novel algorithm (EPARS) that could early predict STAR in a semester by modeling online and offline learning behaviors. The online behaviors come from the log of activities when students use the online learning management system. The offline behaviors derive from the check-in records of the library. Our main observations are two folds. Significantly different from good students, STAR barely have regular and clear study routines. We devised a multi-scale bag-of-regularity method to extract the regularity of learning behaviors that is robust to sparse data. Second, friends of STAR are more likely to be at risk. We constructed a co-occurrence network to approximate the underlying social network and encode the social homophily as features through network embedding. To validate the proposed algorithm, extensive experiments have been conducted among an Asian university with 15,503 undergraduate students. The results indicate EPARS outperforms baselines by 14.62% ~ 38.22% in predicting STAR.
[ { "created": "Sat, 6 Jun 2020 12:56:26 GMT", "version": "v1" } ]
2020-06-09
[ [ "Yang", "Yu", "" ], [ "Wen", "Zhiyuan", "" ], [ "Cao", "Jiannong", "" ], [ "Shen", "Jiaxing", "" ], [ "Yin", "Hongzhi", "" ], [ "Zhou", "Xiaofang", "" ] ]
Early prediction of students at risk (STAR) is an effective and significant means to provide timely intervention for dropout and suicide. Existing works mostly rely on either online or offline learning behaviors, which are not comprehensive enough to capture the whole learning process and lead to unsatisfying prediction performance. We propose a novel algorithm (EPARS) that can predict STAR early in a semester by modeling online and offline learning behaviors. The online behaviors come from the log of activities when students use the online learning management system. The offline behaviors derive from the check-in records of the library. Our main observations are twofold. First, significantly different from good students, STAR barely have regular and clear study routines. We devised a multi-scale bag-of-regularity method to extract the regularity of learning behaviors that is robust to sparse data. Second, friends of STAR are more likely to be at risk. We constructed a co-occurrence network to approximate the underlying social network and encode the social homophily as features through network embedding. To validate the proposed algorithm, extensive experiments have been conducted at an Asian university with 15,503 undergraduate students. The results indicate EPARS outperforms baselines by 14.62% ~ 38.22% in predicting STAR.
2405.12197
Banafsheh Saber Latibari
Banafsheh Saber Latibari, Sujan Ghimire, Muhtasim Alam Chowdhury, Najmeh Nazari, Kevin Immanuel Gubbi, Houman Homayoun, Avesta Sasan, Soheil Salehi
Automated Hardware Logic Obfuscation Framework Using GPT
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Obfuscation stands as a promising solution for safeguarding hardware intellectual property (IP) against a spectrum of threats including reverse engineering, IP piracy, and tampering. In this paper, we introduce Obfus-chat, a novel framework leveraging Generative Pre-trained Transformer (GPT) models to automate the obfuscation process. The proposed framework accepts hardware design netlists and key sizes as inputs, and autonomously generates obfuscated code tailored to enhance security. To evaluate the effectiveness of our approach, we employ the Trust-Hub Obfuscation Benchmark for comparative analysis. We employed SAT attacks to assess the security of the design, along with functional verification procedures to ensure that the obfuscated design remains consistent with the original. Our results demonstrate the efficacy and efficiency of the proposed framework in fortifying hardware IP against potential threats, thus providing a valuable contribution to the field of hardware security.
[ { "created": "Mon, 20 May 2024 17:33:00 GMT", "version": "v1" } ]
2024-05-21
[ [ "Latibari", "Banafsheh Saber", "" ], [ "Ghimire", "Sujan", "" ], [ "Chowdhury", "Muhtasim Alam", "" ], [ "Nazari", "Najmeh", "" ], [ "Gubbi", "Kevin Immanuel", "" ], [ "Homayoun", "Houman", "" ], [ "Sasan", "Avesta", "" ], [ "Salehi", "Soheil", "" ] ]
Obfuscation stands as a promising solution for safeguarding hardware intellectual property (IP) against a spectrum of threats including reverse engineering, IP piracy, and tampering. In this paper, we introduce Obfus-chat, a novel framework leveraging Generative Pre-trained Transformer (GPT) models to automate the obfuscation process. The proposed framework accepts hardware design netlists and key sizes as inputs, and autonomously generates obfuscated code tailored to enhance security. To evaluate the effectiveness of our approach, we employ the Trust-Hub Obfuscation Benchmark for comparative analysis. We employed SAT attacks to assess the security of the design, along with functional verification procedures to ensure that the obfuscated design remains consistent with the original. Our results demonstrate the efficacy and efficiency of the proposed framework in fortifying hardware IP against potential threats, thus providing a valuable contribution to the field of hardware security.
1311.6839
Marcus Schaefer
Marcus Schaefer
Picking Planar Edges; or, Drawing a Graph with a Planar Subgraph
null
null
null
null
cs.CG cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a graph $G$ and a subset $F \subseteq E(G)$ of its edges, is there a drawing of $G$ in which all edges of $F$ are free of crossings? We show that this question can be solved in polynomial time using a Hanani-Tutte style approach. If we require the drawing of $G$ to be straight-line, and allow at most one crossing along each edge in $F$, the problem turns out to be as hard as the existential theory of the real numbers.
[ { "created": "Tue, 26 Nov 2013 22:57:53 GMT", "version": "v1" } ]
2013-11-29
[ [ "Schaefer", "Marcus", "" ] ]
Given a graph $G$ and a subset $F \subseteq E(G)$ of its edges, is there a drawing of $G$ in which all edges of $F$ are free of crossings? We show that this question can be solved in polynomial time using a Hanani-Tutte style approach. If we require the drawing of $G$ to be straight-line, and allow at most one crossing along each edge in $F$, the problem turns out to be as hard as the existential theory of the real numbers.
1604.03698
Shinya Sugiura
Takumi Ishihara and Shinya Sugiura
Frequency-domain equalization aided iterative detection of faster-than-Nyquist signaling with noise whitening
6 pages, 6 figures; IEEE International Conference on Communications (ICC) 2016, Kuala Lumpur, Malaysia
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a serially concatenated turbo-encoded faster-than-Nyquist signaling (FTNS) transceiver that takes into account FTNS-specific colored noise effects. The proposed low-complexity receiver carries out soft-decision frequency-domain equalization with the aid of the minimum-mean square error criterion while whitening the colored noise. Simulation results demonstrate that the proposed multi-stage-concatenated FTNS system achieves a better error-ratio performance than previous systems that do not consider colored noise effects in the high-symbol-packing FTNS regime. Furthermore, as an explicit benefit of the proposed iterative decoder, near-capacity performance is achieved with practical decoding complexity.
[ { "created": "Wed, 13 Apr 2016 09:07:42 GMT", "version": "v1" } ]
2016-04-14
[ [ "Ishihara", "Takumi", "" ], [ "Sugiura", "Shinya", "" ] ]
In this paper, we propose a serially concatenated turbo-encoded faster-than-Nyquist signaling (FTNS) transceiver that takes into account FTNS-specific colored noise effects. The proposed low-complexity receiver carries out soft-decision frequency-domain equalization with the aid of the minimum-mean square error criterion while whitening the colored noise. Simulation results demonstrate that the proposed multi-stage-concatenated FTNS system achieves a better error-ratio performance than previous systems that do not consider colored noise effects in the high-symbol-packing FTNS regime. Furthermore, as an explicit benefit of the proposed iterative decoder, near-capacity performance is achieved with practical decoding complexity.
2208.06569
Emon Dey
Emon Dey, Jumman Hossain, Nirmalya Roy, Carl Busart
SynchroSim: An Integrated Co-simulation Middleware for Heterogeneous Multi-robot System
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
With the advancement of modern robotics, autonomous agents are now capable of hosting sophisticated algorithms, which enables them to make intelligent decisions. But developing and testing such algorithms directly in real-world systems is tedious and may waste valuable resources. This is especially true for heterogeneous multi-agent systems in battlefield environments, where communication is critical in determining the system's behavior and usability. Because simulators of separate paradigms (co-simulation) are needed to simulate such scenarios before deployment, synchronization between those simulators is vital. Existing works aimed at resolving this issue fall short of addressing diversity among deployed agents. In this work, we propose \textit{SynchroSim}, an integrated co-simulation middleware to simulate a heterogeneous multi-robot system. Here we propose a velocity-difference-driven adjustable window size approach with a view to reducing packet loss probability. It takes into account the respective velocities of deployed agents to calculate a suitable window size before transmitting data between them. We consider our algorithm simulator-agnostic, but for the sake of implementation results we have used Gazebo as a physics simulator and NS-3 as a network simulator. Also, we design our algorithm considering the Perception-Action loop inside a closed communication channel, which is one of the essential factors in a contested scenario requiring high fidelity in terms of data transmission. We validate our approach empirically at both the simulation and system level for both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. Our approach achieves a noticeable improvement in reducing packet loss probability ($\approx$11\%) and average packet delay ($\approx$10\%) compared to the fixed-window-size-based synchronization approach.
[ { "created": "Sat, 13 Aug 2022 04:34:06 GMT", "version": "v1" } ]
2022-08-16
[ [ "Dey", "Emon", "" ], [ "Hossain", "Jumman", "" ], [ "Roy", "Nirmalya", "" ], [ "Busart", "Carl", "" ] ]
With the advancement of modern robotics, autonomous agents are now capable of hosting sophisticated algorithms, which enables them to make intelligent decisions. But developing and testing such algorithms directly in real-world systems is tedious and may waste valuable resources. This is especially true for heterogeneous multi-agent systems in battlefield environments, where communication is critical in determining the system's behavior and usability. Because simulators of separate paradigms (co-simulation) are needed to simulate such scenarios before deployment, synchronization between those simulators is vital. Existing works aimed at resolving this issue fall short of addressing diversity among deployed agents. In this work, we propose \textit{SynchroSim}, an integrated co-simulation middleware to simulate a heterogeneous multi-robot system. Here we propose a velocity-difference-driven adjustable window size approach with a view to reducing packet loss probability. It takes into account the respective velocities of deployed agents to calculate a suitable window size before transmitting data between them. We consider our algorithm simulator-agnostic, but for the sake of implementation results we have used Gazebo as a physics simulator and NS-3 as a network simulator. Also, we design our algorithm considering the Perception-Action loop inside a closed communication channel, which is one of the essential factors in a contested scenario requiring high fidelity in terms of data transmission. We validate our approach empirically at both the simulation and system level for both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. Our approach achieves a noticeable improvement in reducing packet loss probability ($\approx$11\%) and average packet delay ($\approx$10\%) compared to the fixed-window-size-based synchronization approach.
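The velocity-difference-driven window sizing described in the SynchroSim abstract can be sketched as follows. This is a hypothetical illustration only: the function name, the shrink factor `k`, and the size bounds are our assumptions, not SynchroSim's actual formula; the abstract supplies just the idea that a larger speed difference between two agents should yield a smaller synchronization window.

```python
def window_size(v1, v2, base=64, min_size=8, max_size=256, k=4.0):
    """Hypothetical adjustable synchronization window: the window
    shrinks as the speed difference between two agents grows,
    which should reduce packet-loss probability during sync."""
    dv = abs(v1 - v2)                 # velocity difference of the two agents
    size = base / (1.0 + k * dv)      # assumed shrink rule, not the paper's
    return int(max(min_size, min(max_size, size)))
```

With equal velocities the base window is kept; a pair with a large velocity gap is clamped to the minimum window.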
2309.11001
Kaustubh Shivdikar
Kaustubh Shivdikar, Yuhui Bao, Rashmi Agrawal, Michael Shen, Gilbert Jonatan, Evelio Mora, Alexander Ingare, Neal Livesay, Jos\'e L. Abell\'an, John Kim, Ajay Joshi, David Kaeli
GME: GPU-based Microarchitectural Extensions to Accelerate Homomorphic Encryption
null
null
10.1145/3613424.3614279
null
cs.CR cs.AR
http://creativecommons.org/licenses/by/4.0/
Fully Homomorphic Encryption (FHE) enables the processing of encrypted data without decrypting it. FHE has garnered significant attention over the past decade as it supports secure outsourcing of data processing to remote cloud services. Despite its promise of strong data privacy and security guarantees, FHE introduces a slowdown of up to five orders of magnitude as compared to the same computation using plaintext data. This overhead is presently a major barrier to the commercial adoption of FHE. In this work, we leverage GPUs to accelerate FHE, capitalizing on a well-established GPU ecosystem available in the cloud. We propose GME, which combines three key microarchitectural extensions along with a compile-time optimization to the current AMD CDNA GPU architecture. First, GME integrates a lightweight on-chip compute unit (CU)-side hierarchical interconnect to retain ciphertext in cache across FHE kernels, thus eliminating redundant memory transactions. Second, to tackle compute bottlenecks, GME introduces special MOD-units that provide native custom hardware support for modular reduction operations, one of the most commonly executed sets of operations in FHE. Third, by integrating the MOD-unit with our novel pipelined $64$-bit integer arithmetic cores (WMAC-units), GME further accelerates FHE workloads by $19\%$. Finally, we propose a Locality-Aware Block Scheduler (LABS) that exploits the temporal locality available in FHE primitive blocks. Incorporating these microarchitectural features and compiler optimizations, we create a synergistic approach achieving average speedups of $796\times$, $14.2\times$, and $2.3\times$ over Intel Xeon CPU, NVIDIA V100 GPU, and Xilinx FPGA implementations, respectively.
[ { "created": "Wed, 20 Sep 2023 01:50:43 GMT", "version": "v1" } ]
2024-04-26
[ [ "Shivdikar", "Kaustubh", "" ], [ "Bao", "Yuhui", "" ], [ "Agrawal", "Rashmi", "" ], [ "Shen", "Michael", "" ], [ "Jonatan", "Gilbert", "" ], [ "Mora", "Evelio", "" ], [ "Ingare", "Alexander", "" ], [ "Livesay", "Neal", "" ], [ "Abellán", "José L.", "" ], [ "Kim", "John", "" ], [ "Joshi", "Ajay", "" ], [ "Kaeli", "David", "" ] ]
Fully Homomorphic Encryption (FHE) enables the processing of encrypted data without decrypting it. FHE has garnered significant attention over the past decade as it supports secure outsourcing of data processing to remote cloud services. Despite its promise of strong data privacy and security guarantees, FHE introduces a slowdown of up to five orders of magnitude as compared to the same computation using plaintext data. This overhead is presently a major barrier to the commercial adoption of FHE. In this work, we leverage GPUs to accelerate FHE, capitalizing on a well-established GPU ecosystem available in the cloud. We propose GME, which combines three key microarchitectural extensions along with a compile-time optimization to the current AMD CDNA GPU architecture. First, GME integrates a lightweight on-chip compute unit (CU)-side hierarchical interconnect to retain ciphertext in cache across FHE kernels, thus eliminating redundant memory transactions. Second, to tackle compute bottlenecks, GME introduces special MOD-units that provide native custom hardware support for modular reduction operations, one of the most commonly executed sets of operations in FHE. Third, by integrating the MOD-unit with our novel pipelined $64$-bit integer arithmetic cores (WMAC-units), GME further accelerates FHE workloads by $19\%$. Finally, we propose a Locality-Aware Block Scheduler (LABS) that exploits the temporal locality available in FHE primitive blocks. Incorporating these microarchitectural features and compiler optimizations, we create a synergistic approach achieving average speedups of $796\times$, $14.2\times$, and $2.3\times$ over Intel Xeon CPU, NVIDIA V100 GPU, and Xilinx FPGA implementations, respectively.
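The modular reduction that GME's MOD-units provide in hardware can be illustrated with a software analogue. The sketch below uses Barrett reduction, a standard divide-free technique; the abstract does not state which reduction algorithm the MOD-units implement, so the choice of Barrett, the variable names, and the example modulus are assumptions.

```python
def barrett_reduce(x, q, k, mu):
    """Compute x mod q without dividing at reduction time (Barrett
    reduction). mu = floor(2^(2k) / q) is precomputed once per modulus;
    valid for 0 <= x < 2^(2k)."""
    t = (x * mu) >> (2 * k)   # quotient estimate, t <= x // q
    r = x - t * q             # remainder estimate, r < 3q
    while r >= q:             # at most two corrective subtractions
        r -= q
    return r

q = 12289                     # an NTT-friendly prime common in lattice schemes
k = q.bit_length()
mu = (1 << (2 * k)) // q      # precomputed reciprocal floor(2^(2k) / q)
```

The while loop replaces the expensive modulo, which is the kind of operation a dedicated hardware unit can pipeline.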
2402.07303
R\"udiger Valk
R\"udiger Valk
Analysing cycloids using linear algebra
12 pages, 6 figures
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Cycloids are particular Petri nets for modelling processes of actions or events. They belong to the fundaments of Petri's general systems theory and have very different interpretations, ranging from Einstein's relativity theory and elementary information processing gates to the modelling of interacting sequential processes. This article contains previously unpublished proofs of cycloid properties using linear algebra.
[ { "created": "Sun, 11 Feb 2024 20:45:45 GMT", "version": "v1" } ]
2024-02-13
[ [ "Valk", "Rüdiger", "" ] ]
Cycloids are particular Petri nets for modelling processes of actions or events. They belong to the fundaments of Petri's general systems theory and have very different interpretations, ranging from Einstein's relativity theory and elementary information processing gates to the modelling of interacting sequential processes. This article contains previously unpublished proofs of cycloid properties using linear algebra.
2203.12798
Jun-Gi Jang
Jun-Gi Jang and U Kang
DPar2: Fast and Scalable PARAFAC2 Decomposition for Irregular Dense Tensors
14 pages, 11 figures. To appear at the 38th IEEE International Conference on Data Engineering (ICDE '22)
null
null
null
cs.LG cs.DB cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an irregular dense tensor, how can we efficiently analyze it? An irregular tensor is a collection of matrices whose columns have the same size and rows have different sizes from each other. PARAFAC2 decomposition is a fundamental tool to deal with an irregular tensor in applications including phenotype discovery and trend analysis. Although several PARAFAC2 decomposition methods exist, their efficiency is limited for irregular dense tensors due to the expensive computations involved with the tensor. In this paper, we propose DPar2, a fast and scalable PARAFAC2 decomposition method for irregular dense tensors. DPar2 achieves high efficiency by effectively compressing each slice matrix of a given irregular tensor, carefully reordering computations with the compression results, and exploiting the irregularity of the tensor. Extensive experiments show that DPar2 is up to 6.0x faster than competitors on real-world irregular tensors while achieving comparable accuracy. In addition, DPar2 is scalable with respect to the tensor size and target rank.
[ { "created": "Thu, 24 Mar 2022 01:43:13 GMT", "version": "v1" }, { "created": "Thu, 2 Jun 2022 05:56:41 GMT", "version": "v2" } ]
2022-06-03
[ [ "Jang", "Jun-Gi", "" ], [ "Kang", "U", "" ] ]
Given an irregular dense tensor, how can we efficiently analyze it? An irregular tensor is a collection of matrices whose columns have the same size and rows have different sizes from each other. PARAFAC2 decomposition is a fundamental tool to deal with an irregular tensor in applications including phenotype discovery and trend analysis. Although several PARAFAC2 decomposition methods exist, their efficiency is limited for irregular dense tensors due to the expensive computations involved with the tensor. In this paper, we propose DPar2, a fast and scalable PARAFAC2 decomposition method for irregular dense tensors. DPar2 achieves high efficiency by effectively compressing each slice matrix of a given irregular tensor, carefully reordering computations with the compression results, and exploiting the irregularity of the tensor. Extensive experiments show that DPar2 is up to 6.0x faster than competitors on real-world irregular tensors while achieving comparable accuracy. In addition, DPar2 is scalable with respect to the tensor size and target rank.
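DPar2's first stage compresses each slice matrix of the irregular tensor. As a hedged sketch of what such a compression step can look like, the snippet below truncates each slice with an SVD; the paper's actual compression scheme, interfaces, and rank selection may differ.

```python
import numpy as np

def compress_slices(slices, rank):
    """Compress each slice matrix X ~= U @ diag(s) @ Vt with a
    truncated SVD, standing in for DPar2's compression stage
    (illustrative, not the paper's exact method)."""
    out = []
    for X in slices:
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        out.append((U[:, :rank], s[:rank], Vt[:rank]))  # keep top `rank` terms
    return out
```

Slices may have different row counts — only the column dimension must match — which mirrors the irregular-tensor setting.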
2103.05346
Jihan Yang
Jihan Yang, Shaoshuai Shi, Zhe Wang, Hongsheng Li, Xiaojuan Qi
ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection
CVPR2021
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds. First, we pre-train the 3D detector on the source domain with our proposed random object scaling strategy for mitigating the negative effects of source domain bias. Then, the detector is iteratively improved on the target domain by alternately conducting two steps, which are the pseudo label updating with the developed quality-aware triplet memory bank and the model training with curriculum data augmentation. These specific designs for 3D object detection enable the detector to be trained with consistent and high-quality pseudo labels and to avoid overfitting to the large number of easy examples in pseudo labeled data. Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark. Code will be available at https://github.com/CVMI-Lab/ST3D.
[ { "created": "Tue, 9 Mar 2021 10:51:24 GMT", "version": "v1" }, { "created": "Sat, 27 Mar 2021 07:36:13 GMT", "version": "v2" } ]
2021-03-30
[ [ "Yang", "Jihan", "" ], [ "Shi", "Shaoshuai", "" ], [ "Wang", "Zhe", "" ], [ "Li", "Hongsheng", "" ], [ "Qi", "Xiaojuan", "" ] ]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds. First, we pre-train the 3D detector on the source domain with our proposed random object scaling strategy for mitigating the negative effects of source domain bias. Then, the detector is iteratively improved on the target domain by alternately conducting two steps, which are the pseudo label updating with the developed quality-aware triplet memory bank and the model training with curriculum data augmentation. These specific designs for 3D object detection enable the detector to be trained with consistent and high-quality pseudo labels and to avoid overfitting to the large number of easy examples in pseudo labeled data. Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark. Code will be available at https://github.com/CVMI-Lab/ST3D.
1009.4572
S. M. Kamruzzaman
S. M. Kamruzzaman, Ahmed Ryadh Hasan, Abu Bakar Siddiquee, and Md. Ehsanul Hoque Mazumder
Medical diagnosis using neural network
4 pages, International Conference
Proc. 3rd International Conference on Electrical & Computer Engineering (ICECE 2004), Dhaka Bangladesh, pp. 537-540, Dec. 2004
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This research searches for alternatives for the resolution of complex medical diagnoses where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm, with backpropagation, offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to reach an optimal size of the neural network. The MFNNCA was tested on several benchmark classification problems including cancer, heart disease, and diabetes. Experimental results show that the MFNNCA can produce optimal neural network architectures with good generalization ability.
[ { "created": "Thu, 23 Sep 2010 10:44:24 GMT", "version": "v1" } ]
2010-09-28
[ [ "Kamruzzaman", "S. M.", "" ], [ "Hasan", "Ahmed Ryadh", "" ], [ "Siddiquee", "Abu Bakar", "" ], [ "Mazumder", "Md. Ehsanul Hoque", "" ] ]
This research searches for alternatives for the resolution of complex medical diagnoses where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm, with backpropagation, offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to reach an optimal size of the neural network. The MFNNCA was tested on several benchmark classification problems including cancer, heart disease, and diabetes. Experimental results show that the MFNNCA can produce optimal neural network architectures with good generalization ability.
1910.09573
David Madras
David Madras, James Atwood, Alex D'Amour
Detecting Underspecification with Local Ensembles
Published as a conference paper at ICLR 2020 under the title "Detecting Extrapolation with Local Ensembles"
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present local ensembles, a method for detecting underspecification -- when many possible predictors are consistent with the training data and model class -- at test time in a pre-trained model. Our method uses local second-order information to approximate the variance of predictions across an ensemble of models from the same class. We compute this approximation by estimating the norm of the component of a test point's gradient that aligns with the low-curvature directions of the Hessian, and provide a tractable method for estimating this quantity. Experimentally, we show that our method is capable of detecting when a pre-trained model is underspecified on test data, with applications to out-of-distribution detection, detecting spurious correlates, and active learning.
[ { "created": "Mon, 21 Oct 2019 18:05:52 GMT", "version": "v1" }, { "created": "Tue, 7 Dec 2021 20:58:12 GMT", "version": "v2" } ]
2021-12-09
[ [ "Madras", "David", "" ], [ "Atwood", "James", "" ], [ "D'Amour", "Alex", "" ] ]
We present local ensembles, a method for detecting underspecification -- when many possible predictors are consistent with the training data and model class -- at test time in a pre-trained model. Our method uses local second-order information to approximate the variance of predictions across an ensemble of models from the same class. We compute this approximation by estimating the norm of the component of a test point's gradient that aligns with the low-curvature directions of the Hessian, and provide a tractable method for estimating this quantity. Experimentally, we show that our method is capable of detecting when a pre-trained model is underspecified on test data, with applications to out-of-distribution detection, detecting spurious correlates, and active learning.
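The quantity the local-ensembles method estimates — the norm of a test point's gradient component along the low-curvature directions of the Hessian — can be computed exactly for a toy problem small enough to eigendecompose. The sketch below is illustrative only: the paper's contribution is a tractable estimator that avoids this explicit eigendecomposition, and the function name and the `top_m` cutoff are our assumptions.

```python
import numpy as np

def extrapolation_score(hessian, grad, top_m=2):
    """Norm of the gradient component lying in the low-curvature
    subspace of the Hessian (exact toy version, not the paper's
    scalable estimator)."""
    vals, vecs = np.linalg.eigh(hessian)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]            # reorder: highest curvature first
    low_curv = vecs[:, order[top_m:]]         # drop the top_m stiff directions
    return np.linalg.norm(low_curv.T @ grad)  # projected gradient norm
```

A large score means the test gradient points along directions the training loss barely constrains, i.e. the ensemble of near-equivalent models would disagree there.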
1504.05998
Dong Su
Dong Su, Jianneng Cao, Ninghui Li, Elisa Bertino, Hongxia Jin
Differentially Private $k$-Means Clustering
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/3.0/
There are two broad approaches for differentially private data analysis. The interactive approach aims at developing customized differentially private algorithms for various data mining tasks. The non-interactive approach aims at developing differentially private algorithms that can output a synopsis of the input dataset, which can then be used to support various data mining tasks. In this paper we study the tradeoff of interactive vs. non-interactive approaches and propose a hybrid approach that combines the interactive and non-interactive approaches, using $k$-means clustering as an example. In the hybrid approach to differentially private $k$-means clustering, one first uses a non-interactive mechanism to publish a synopsis of the input dataset, then applies the standard $k$-means clustering algorithm to learn $k$ cluster centroids, and finally uses an interactive approach to further improve these cluster centroids. We analyze the error behavior of both non-interactive and interactive approaches and use such analysis to decide how to allocate privacy budget between the non-interactive step and the interactive step. Results from extensive experiments support our analysis and demonstrate the effectiveness of our approach.
[ { "created": "Wed, 22 Apr 2015 22:21:30 GMT", "version": "v1" } ]
2015-04-24
[ [ "Su", "Dong", "" ], [ "Cao", "Jianneng", "" ], [ "Li", "Ninghui", "" ], [ "Bertino", "Elisa", "" ], [ "Jin", "Hongxia", "" ] ]
There are two broad approaches for differentially private data analysis. The interactive approach aims at developing customized differentially private algorithms for various data mining tasks. The non-interactive approach aims at developing differentially private algorithms that can output a synopsis of the input dataset, which can then be used to support various data mining tasks. In this paper we study the tradeoff of interactive vs. non-interactive approaches and propose a hybrid approach that combines the interactive and non-interactive approaches, using $k$-means clustering as an example. In the hybrid approach to differentially private $k$-means clustering, one first uses a non-interactive mechanism to publish a synopsis of the input dataset, then applies the standard $k$-means clustering algorithm to learn $k$ cluster centroids, and finally uses an interactive approach to further improve these cluster centroids. We analyze the error behavior of both non-interactive and interactive approaches and use such analysis to decide how to allocate privacy budget between the non-interactive step and the interactive step. Results from extensive experiments support our analysis and demonstrate the effectiveness of our approach.
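The interactive step of the hybrid approach can be illustrated with one differentially private Lloyd iteration: per-cluster coordinate sums and counts are perturbed with Laplace noise before recomputing centroids. This is a minimal sketch under assumed conditions (coordinates in [0, 1], an even budget split between counts and sums), not the paper's exact mechanism or budget allocation.

```python
import numpy as np

def dp_kmeans_step(points, centroids, epsilon, rng):
    """One differentially private Lloyd update: per-cluster counts
    and coordinate sums get Laplace noise before each centroid is
    recomputed. Assumes coordinates lie in [0, 1] (sensitivity 1
    for a count, d for a sum) and splits epsilon evenly."""
    k, d = centroids.shape
    # assign each point to its nearest current centroid
    labels = np.argmin(
        ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
    eps_half = epsilon / 2.0
    new = np.empty_like(centroids)
    for j in range(k):
        members = points[labels == j]
        noisy_count = len(members) + rng.laplace(0.0, 1.0 / eps_half)
        noisy_sum = members.sum(0) + rng.laplace(0.0, d / eps_half, size=d)
        new[j] = noisy_sum / max(noisy_count, 1.0)  # guard against tiny counts
    return new
```

With a large epsilon the noise vanishes and the update reduces to an ordinary Lloyd step, which is a useful sanity check.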
1903.06442
Shiwen He
Shiwen He, Ju Ren, Jiaheng Wang, Yongming Huang, Yaoxue Zhang, Weihua Zhuang, and Sherman (Xuemin) Shen
Cloud-Edge Coordinated Processing: Low-Latency Multicasting Transmission
35 pages,9 figures, to appear in IEEE Journal on Selected Areas in Communications-Special Issue on Network Softwarization & Enablers
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, edge caching and multicasting have arisen as two promising technologies to support high-data-rate and low-latency delivery in wireless communication networks. In this paper, we design three transmission schemes aiming to minimize the delivery latency for cache-enabled multigroup multicasting networks. In particular, a full caching bulk transmission scheme is first designed as a performance benchmark for the ideal situation where the caching capability of each enhanced remote radio head (eRRH) is sufficiently large to cache all files. For the practical situation where the caching capability of each eRRH is limited, we further design two transmission schemes, namely partial caching bulk transmission (PCBT) and partial caching pipelined transmission (PCPT) schemes. In the PCBT scheme, eRRHs first fetch the uncached requested files from the baseband unit (BBU) and then all requested files are simultaneously transmitted to the users. In the PCPT scheme, eRRHs first transmit the cached requested files while fetching the uncached requested files from the BBU. Then, the remaining cached requested files and fetched uncached requested files are simultaneously transmitted to the users. The design goal of the three transmission schemes is to minimize the delivery latency, subject to some practical constraints. Efficient algorithms are developed for the low-latency cloud-edge coordinated transmission strategies. Numerical results are provided to evaluate the performance of the proposed transmission schemes and show that the PCPT scheme outperforms the PCBT scheme in terms of the delivery latency criterion.
[ { "created": "Fri, 15 Mar 2019 10:18:18 GMT", "version": "v1" } ]
2019-03-18
[ [ "He", "Shiwen", "" ], [ "Ren", "Ju", "" ], [ "Wang", "Jiaheng", "" ], [ "Huang", "Yongming", "" ], [ "Zhang", "Yaoxue", "" ], [ "Zhuang", "Weihua", "" ], [ "Shen", "Xuemin", "" ] ]
Recently, edge caching and multicasting have arisen as two promising technologies to support high-data-rate and low-latency delivery in wireless communication networks. In this paper, we design three transmission schemes aiming to minimize the delivery latency for cache-enabled multigroup multicasting networks. In particular, a full caching bulk transmission scheme is first designed as a performance benchmark for the ideal situation where the caching capability of each enhanced remote radio head (eRRH) is sufficiently large to cache all files. For the practical situation where the caching capability of each eRRH is limited, we further design two transmission schemes, namely partial caching bulk transmission (PCBT) and partial caching pipelined transmission (PCPT) schemes. In the PCBT scheme, eRRHs first fetch the uncached requested files from the baseband unit (BBU) and then all requested files are simultaneously transmitted to the users. In the PCPT scheme, eRRHs first transmit the cached requested files while fetching the uncached requested files from the BBU. Then, the remaining cached requested files and fetched uncached requested files are simultaneously transmitted to the users. The design goal of the three transmission schemes is to minimize the delivery latency, subject to some practical constraints. Efficient algorithms are developed for the low-latency cloud-edge coordinated transmission strategies. Numerical results are provided to evaluate the performance of the proposed transmission schemes and show that the PCPT scheme outperforms the PCBT scheme in terms of the delivery latency criterion.
1312.2299
Vasilis Syrgkanis
Nicole Immorlica, Greg Stoddard, Vasilis Syrgkanis
Social Status and Badge Design
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many websites rely on user-generated content to provide value to consumers. These websites typically incentivize participation by awarding users badges based on their contributions. While these badges typically have no explicit value, they act as symbols of social status within a community. In this paper, we consider the design of badge mechanisms for the objective of maximizing the total contributions made to a website. Users exert costly effort to make contributions and, in return, are awarded with badges. A badge is only valued to the extent that it signals social status and thus badge valuations are determined endogenously by the number of users who earn each badge. The goal of this paper is to study the design of optimal and approximately optimal badge mechanisms under these status valuations. We characterize badge mechanisms by whether they use a coarse partitioning scheme, i.e. awarding the same badge to many users, or use a fine partitioning scheme, i.e. awarding a unique badge to most users. We find that the optimal mechanism uses both fine partitioning and coarse partitioning. When status valuations exhibit a decreasing marginal value property, we prove that coarse partitioning is a necessary feature of any approximately optimal mechanism. Conversely, when status valuations exhibit an increasing marginal value property, we prove that fine partitioning is necessary for approximate optimality.
[ { "created": "Mon, 9 Dec 2013 03:18:18 GMT", "version": "v1" }, { "created": "Fri, 21 Feb 2014 01:41:12 GMT", "version": "v2" } ]
2014-02-24
[ [ "Immorlica", "Nicole", "" ], [ "Stoddard", "Greg", "" ], [ "Syrgkanis", "Vasilis", "" ] ]
Many websites rely on user-generated content to provide value to consumers. These websites typically incentivize participation by awarding users badges based on their contributions. While these badges typically have no explicit value, they act as symbols of social status within a community. In this paper, we consider the design of badge mechanisms for the objective of maximizing the total contributions made to a website. Users exert costly effort to make contributions and, in return, are awarded with badges. A badge is only valued to the extent that it signals social status and thus badge valuations are determined endogenously by the number of users who earn each badge. The goal of this paper is to study the design of optimal and approximately optimal badge mechanisms under these status valuations. We characterize badge mechanisms by whether they use a coarse partitioning scheme, i.e. awarding the same badge to many users, or use a fine partitioning scheme, i.e. awarding a unique badge to most users. We find that the optimal mechanism uses both fine partitioning and coarse partitioning. When status valuations exhibit a decreasing marginal value property, we prove that coarse partitioning is a necessary feature of any approximately optimal mechanism. Conversely, when status valuations exhibit an increasing marginal value property, we prove that fine partitioning is necessary for approximate optimality.
2203.05051
John Howard
John J. Howard, Eli J. Laird, Yevgeniy B. Sirotin, Rebecca E. Rubin, Jerry L. Tipton, and Arun R. Vemury
Evaluating Proposed Fairness Models for Face Recognition Algorithms
null
null
null
null
cs.CV cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of face recognition algorithms by academic and commercial organizations is growing rapidly due to the onset of deep learning and the widespread availability of training data. Though tests of face recognition algorithm performance indicate yearly performance gains, error rates for many of these systems differ based on the demographic composition of the test set. These "demographic differentials" in algorithm performance can contribute to unequal or unfair outcomes for certain groups of people, raising concerns with increased worldwide adoption of face recognition systems. Consequently, regulatory bodies in both the United States and Europe have proposed new rules requiring audits of biometric systems for "discriminatory impacts" (European Union Artificial Intelligence Act) and "fairness" (U.S. Federal Trade Commission). However, no standard for measuring fairness in biometric systems yet exists. This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe. We find that both proposed methods are challenging to interpret when applied to disaggregated face recognition error rates as they are commonly experienced in practice. To address this, we propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines a set of properties desirable in a face recognition algorithm fairness measure. We further develop a new fairness measure, the Gini Aggregation Rate for Biometric Equitability (GARBE), and show how, in conjunction with Pareto optimization, this measure can be used to select among alternative algorithms based on the accuracy/fairness trade-space. Finally, we have open-sourced our dataset of machine-readable, demographically disaggregated error rates. We believe this is currently the largest open-source dataset of its kind.
[ { "created": "Wed, 9 Mar 2022 21:16:43 GMT", "version": "v1" } ]
2022-03-11
[ [ "Howard", "John J.", "" ], [ "Laird", "Eli J.", "" ], [ "Sirotin", "Yevgeniy B.", "" ], [ "Rubin", "Rebecca E.", "" ], [ "Tipton", "Jerry L.", "" ], [ "Vemury", "Arun R.", "" ] ]
The development of face recognition algorithms by academic and commercial organizations is growing rapidly due to the onset of deep learning and the widespread availability of training data. Though tests of face recognition algorithm performance indicate yearly performance gains, error rates for many of these systems differ based on the demographic composition of the test set. These "demographic differentials" in algorithm performance can contribute to unequal or unfair outcomes for certain groups of people, raising concerns with increased worldwide adoption of face recognition systems. Consequently, regulatory bodies in both the United States and Europe have proposed new rules requiring audits of biometric systems for "discriminatory impacts" (European Union Artificial Intelligence Act) and "fairness" (U.S. Federal Trade Commission). However, no standard for measuring fairness in biometric systems yet exists. This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe. We find that both proposed methods are challenging to interpret when applied to disaggregated face recognition error rates as they are commonly experienced in practice. To address this, we propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines a set of properties desirable in a face recognition algorithm fairness measure. We further develop a new fairness measure, the Gini Aggregation Rate for Biometric Equitability (GARBE), and show how, in conjunction with Pareto optimization, this measure can be used to select among alternative algorithms based on the accuracy/fairness trade-space. Finally, we have open-sourced our dataset of machine-readable, demographically disaggregated error rates. We believe this is currently the largest open-source dataset of its kind.
2405.04913
Qi Lai
Qi Lai, Chi-Man Vong
Weakly-supervised Semantic Segmentation via Dual-stream Contrastive Learning of Cross-image Contextual Information
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly supervised semantic segmentation (WSSS) aims at learning a semantic segmentation model with only image-level tags. Despite intensive research on deep learning approaches over a decade, there is still a significant performance gap between WSSS and full semantic segmentation. Most current WSSS methods focus only on limited single-image (pixel-wise) information while ignoring the valuable inter-image (semantic-wise) information. From this perspective, a novel end-to-end WSSS framework called DSCNet is developed along with two innovations: i) pixel-wise group contrast and semantic-wise graph contrast are proposed and introduced into the WSSS framework; ii) a novel dual-stream contrastive learning (DSCL) mechanism is designed to jointly handle pixel-wise and semantic-wise context information for better WSSS performance. Specifically, the pixel-wise group contrast learning (PGCL) and semantic-wise graph contrast learning (SGCL) tasks form a more comprehensive solution. Extensive experiments on PASCAL VOC and MS COCO benchmarks verify the superiority of DSCNet over SOTA approaches and baseline models.
[ { "created": "Wed, 8 May 2024 09:35:26 GMT", "version": "v1" } ]
2024-05-09
[ [ "Lai", "Qi", "" ], [ "Vong", "Chi-Man", "" ] ]
Weakly supervised semantic segmentation (WSSS) aims at learning a semantic segmentation model with only image-level tags. Despite intensive research on deep learning approaches over a decade, there is still a significant performance gap between WSSS and full semantic segmentation. Most current WSSS methods focus only on limited single-image (pixel-wise) information while ignoring the valuable inter-image (semantic-wise) information. From this perspective, a novel end-to-end WSSS framework called DSCNet is developed along with two innovations: i) pixel-wise group contrast and semantic-wise graph contrast are proposed and introduced into the WSSS framework; ii) a novel dual-stream contrastive learning (DSCL) mechanism is designed to jointly handle pixel-wise and semantic-wise context information for better WSSS performance. Specifically, the pixel-wise group contrast learning (PGCL) and semantic-wise graph contrast learning (SGCL) tasks form a more comprehensive solution. Extensive experiments on PASCAL VOC and MS COCO benchmarks verify the superiority of DSCNet over SOTA approaches and baseline models.
1901.10251
Orr Krupnik
Orr Krupnik, Igor Mordatch, Aviv Tamar
Multi-Agent Reinforcement Learning with Multi-Step Generative Models
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider model-based reinforcement learning (MBRL) in 2-agent, high-fidelity continuous control problems -- an important domain for robots interacting with other agents in the same workspace. For non-trivial dynamical systems, MBRL typically suffers from accumulating errors. Several recent studies have addressed this problem by learning latent variable models for trajectory segments and optimizing over behavior in the latent space. In this work, we investigate whether this approach can be extended to 2-agent competitive and cooperative settings. The fundamental challenge is how to learn models that capture interactions between agents, yet are disentangled to allow for optimization of each agent behavior separately. We propose such models based on a disentangled variational auto-encoder, and demonstrate our approach on a simulated 2-robot manipulation task, where one robot can either help or distract the other. We show that our approach has better sample efficiency than a strong model-free RL baseline, and can learn both cooperative and adversarial behavior from the same data.
[ { "created": "Tue, 29 Jan 2019 12:29:20 GMT", "version": "v1" }, { "created": "Fri, 19 Jul 2019 01:44:22 GMT", "version": "v2" }, { "created": "Fri, 1 Nov 2019 04:51:13 GMT", "version": "v3" } ]
2019-11-04
[ [ "Krupnik", "Orr", "" ], [ "Mordatch", "Igor", "" ], [ "Tamar", "Aviv", "" ] ]
We consider model-based reinforcement learning (MBRL) in 2-agent, high-fidelity continuous control problems -- an important domain for robots interacting with other agents in the same workspace. For non-trivial dynamical systems, MBRL typically suffers from accumulating errors. Several recent studies have addressed this problem by learning latent variable models for trajectory segments and optimizing over behavior in the latent space. In this work, we investigate whether this approach can be extended to 2-agent competitive and cooperative settings. The fundamental challenge is how to learn models that capture interactions between agents, yet are disentangled to allow for optimization of each agent behavior separately. We propose such models based on a disentangled variational auto-encoder, and demonstrate our approach on a simulated 2-robot manipulation task, where one robot can either help or distract the other. We show that our approach has better sample efficiency than a strong model-free RL baseline, and can learn both cooperative and adversarial behavior from the same data.
2002.12349
Vasileios Vasilopoulos
Vasileios Vasilopoulos, Georgios Pavlakos, Sean L. Bowman, J. Diego Caporale, Kostas Daniilidis, George J. Pappas, Daniel E. Koditschek
Technical Report: Reactive Semantic Planning in Unexplored Semantic Environments Using Deep Perceptual Feedback
Technical Report accompanying the paper "Reactive Semantic Planning in Unexplored Semantic Environments Using Deep Perceptual Feedback" (12 pages, 8 figures) - Using definitions and equations from arxiv:2002.08946
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a reactive planning system that enriches the topological representation of an environment with a tightly integrated semantic representation, achieved by incorporating and exploiting advances in deep perceptual learning and probabilistic semantic reasoning. Our architecture combines object detection with semantic SLAM, affording robust, reactive logical as well as geometric planning in unexplored environments. Moreover, by incorporating a human mesh estimation algorithm, our system is capable of reacting and responding in real time to semantically labeled human motions and gestures. New formal results allow tracking of suitably non-adversarial moving targets, while maintaining the same collision avoidance guarantees. We suggest the empirical utility of the proposed control architecture with a numerical study including comparisons with a state-of-the-art dynamic replanning algorithm, and physical implementation on both a wheeled and legged platform in different settings with both geometric and semantic goals.
[ { "created": "Tue, 25 Feb 2020 23:02:30 GMT", "version": "v1" }, { "created": "Fri, 28 Feb 2020 04:55:13 GMT", "version": "v2" }, { "created": "Mon, 4 May 2020 16:54:22 GMT", "version": "v3" } ]
2020-05-05
[ [ "Vasilopoulos", "Vasileios", "" ], [ "Pavlakos", "Georgios", "" ], [ "Bowman", "Sean L.", "" ], [ "Caporale", "J. Diego", "" ], [ "Daniilidis", "Kostas", "" ], [ "Pappas", "George J.", "" ], [ "Koditschek", "Daniel E.", "" ] ]
This paper presents a reactive planning system that enriches the topological representation of an environment with a tightly integrated semantic representation, achieved by incorporating and exploiting advances in deep perceptual learning and probabilistic semantic reasoning. Our architecture combines object detection with semantic SLAM, affording robust, reactive logical as well as geometric planning in unexplored environments. Moreover, by incorporating a human mesh estimation algorithm, our system is capable of reacting and responding in real time to semantically labeled human motions and gestures. New formal results allow tracking of suitably non-adversarial moving targets, while maintaining the same collision avoidance guarantees. We suggest the empirical utility of the proposed control architecture with a numerical study including comparisons with a state-of-the-art dynamic replanning algorithm, and physical implementation on both a wheeled and legged platform in different settings with both geometric and semantic goals.
2209.15167
Heinrich Dinkel
Heinrich Dinkel, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Yujun Wang
An empirical study of weakly supervised audio tagging embeddings for general audio representations
Odyssey 2022
null
10.21437/Odyssey.2022-54
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
We study the usability of pre-trained weakly supervised audio tagging (AT) models as feature extractors for general audio representations. We mainly analyze the feasibility of transferring those embeddings to other tasks within the speech and sound domains. Specifically, we benchmark weakly supervised pre-trained models (MobileNetV2 and EfficientNet-B0) against modern self-supervised learning methods (BYOL-A) as feature extractors. Fourteen downstream tasks are used for evaluation ranging from music instrument classification to language classification. Our results indicate that AT pre-trained models are an excellent transfer learning choice for music, event, and emotion recognition tasks. Further, finetuning AT models can also benefit speech-related tasks such as keyword spotting and intent classification.
[ { "created": "Fri, 30 Sep 2022 01:35:36 GMT", "version": "v1" } ]
2022-10-03
[ [ "Dinkel", "Heinrich", "" ], [ "Yan", "Zhiyong", "" ], [ "Wang", "Yongqing", "" ], [ "Zhang", "Junbo", "" ], [ "Wang", "Yujun", "" ] ]
We study the usability of pre-trained weakly supervised audio tagging (AT) models as feature extractors for general audio representations. We mainly analyze the feasibility of transferring those embeddings to other tasks within the speech and sound domains. Specifically, we benchmark weakly supervised pre-trained models (MobileNetV2 and EfficientNet-B0) against modern self-supervised learning methods (BYOL-A) as feature extractors. Fourteen downstream tasks are used for evaluation ranging from music instrument classification to language classification. Our results indicate that AT pre-trained models are an excellent transfer learning choice for music, event, and emotion recognition tasks. Further, finetuning AT models can also benefit speech-related tasks such as keyword spotting and intent classification.
1809.00509
Diego Esteves
Aniketh Janardhan Reddy and Gil Rocha and Diego Esteves
DeFactoNLP: Fact Verification using Entity Recognition, TFIDF Vector Comparison and Decomposable Attention
null
EMNLP 2018: Conference on Empirical Methods in Natural Language Processing (The First Workshop on Fact Extraction and Verification)
null
null
cs.AI cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we describe DeFactoNLP, the system we designed for the FEVER 2018 Shared Task. The aim of this task was to conceive a system that can not only automatically assess the veracity of a claim but also retrieve evidence supporting this assessment from Wikipedia. In our approach, the Wikipedia documents whose Term Frequency-Inverse Document Frequency (TFIDF) vectors are most similar to the vector of the claim and those documents whose names are similar to those of the named entities (NEs) mentioned in the claim are identified as the documents which might contain evidence. The sentences in these documents are then supplied to a textual entailment recognition module. This module calculates the probability of each sentence supporting the claim, contradicting the claim or not providing any relevant information to assess the veracity of the claim. Various features computed using these probabilities are finally used by a Random Forest classifier to determine the overall truthfulness of the claim. The sentences which support this classification are returned as evidence. Our approach achieved a 0.4277 evidence F1-score, a 0.5136 label accuracy and a 0.3833 FEVER score.
[ { "created": "Mon, 3 Sep 2018 09:07:17 GMT", "version": "v1" } ]
2018-09-10
[ [ "Reddy", "Aniketh Janardhan", "" ], [ "Rocha", "Gil", "" ], [ "Esteves", "Diego", "" ] ]
In this paper, we describe DeFactoNLP, the system we designed for the FEVER 2018 Shared Task. The aim of this task was to conceive a system that can not only automatically assess the veracity of a claim but also retrieve evidence supporting this assessment from Wikipedia. In our approach, the Wikipedia documents whose Term Frequency-Inverse Document Frequency (TFIDF) vectors are most similar to the vector of the claim and those documents whose names are similar to those of the named entities (NEs) mentioned in the claim are identified as the documents which might contain evidence. The sentences in these documents are then supplied to a textual entailment recognition module. This module calculates the probability of each sentence supporting the claim, contradicting the claim or not providing any relevant information to assess the veracity of the claim. Various features computed using these probabilities are finally used by a Random Forest classifier to determine the overall truthfulness of the claim. The sentences which support this classification are returned as evidence. Our approach achieved a 0.4277 evidence F1-score, a 0.5136 label accuracy and a 0.3833 FEVER score.
1805.02751
Noah Apthorpe
Gordon Chu, Noah Apthorpe, Nick Feamster
Security and Privacy Analyses of Internet of Things Children's Toys
8 pages, 8 figures; publication version
IEEE Internet of Things Journal (IoT-J), 2018
10.1109/JIOT.2018.2866423
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the security and privacy of Internet-connected children's smart toys through case studies of three commercially-available products. We conduct network and application vulnerability analyses of each toy using static and dynamic analysis techniques, including application binary decompilation and network monitoring. We discover several publicly undisclosed vulnerabilities that violate the Children's Online Privacy Protection Rule (COPPA) as well as the toys' individual privacy policies. These vulnerabilities, especially security flaws in network communications with first-party servers, are indicative of a disconnect between many IoT toy developers and security and privacy best practices despite increased attention to Internet-connected toy hacking risks.
[ { "created": "Mon, 7 May 2018 21:23:47 GMT", "version": "v1" }, { "created": "Wed, 29 Aug 2018 00:35:58 GMT", "version": "v2" } ]
2018-08-30
[ [ "Chu", "Gordon", "" ], [ "Apthorpe", "Noah", "" ], [ "Feamster", "Nick", "" ] ]
This paper investigates the security and privacy of Internet-connected children's smart toys through case studies of three commercially-available products. We conduct network and application vulnerability analyses of each toy using static and dynamic analysis techniques, including application binary decompilation and network monitoring. We discover several publicly undisclosed vulnerabilities that violate the Children's Online Privacy Protection Rule (COPPA) as well as the toys' individual privacy policies. These vulnerabilities, especially security flaws in network communications with first-party servers, are indicative of a disconnect between many IoT toy developers and security and privacy best practices despite increased attention to Internet-connected toy hacking risks.
1504.05694
Serge Egelman
Linda Lee, Serge Egelman, Joong Hwa Lee, David Wagner
Risk Perceptions for Wearable Devices
null
null
null
null
cs.CY cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wearable devices, or "wearables," bring great benefits but also potential risks that could expose users' activities without their awareness or consent. In this paper, we report findings from the first large-scale survey conducted to investigate user security and privacy concerns regarding wearables. We surveyed 1,782 Internet users in order to identify risks that are particularly concerning to them; these risks are inspired by the sensor inputs and applications of popular wearable technologies. During this experiment, our questions controlled for the effects of what data was being accessed and with whom it was being shared. We also investigated how these emergent threats compared to existent mobile threats, how upcoming capabilities and artifacts compared to existing technologies, and how users ranked technical and nontechnical concerns to sketch a concrete and broad view of the wearable device landscape. We hope that this work will inform the design of future user notification, permission management, and access control schemes for wearables.
[ { "created": "Wed, 22 Apr 2015 08:44:23 GMT", "version": "v1" } ]
2015-04-23
[ [ "Lee", "Linda", "" ], [ "Egelman", "Serge", "" ], [ "Lee", "Joong Hwa", "" ], [ "Wagner", "David", "" ] ]
Wearable devices, or "wearables," bring great benefits but also potential risks that could expose users' activities without their awareness or consent. In this paper, we report findings from the first large-scale survey conducted to investigate user security and privacy concerns regarding wearables. We surveyed 1,782 Internet users in order to identify risks that are particularly concerning to them; these risks are inspired by the sensor inputs and applications of popular wearable technologies. During this experiment, our questions controlled for the effects of what data was being accessed and with whom it was being shared. We also investigated how these emergent threats compared to existent mobile threats, how upcoming capabilities and artifacts compared to existing technologies, and how users ranked technical and nontechnical concerns to sketch a concrete and broad view of the wearable device landscape. We hope that this work will inform the design of future user notification, permission management, and access control schemes for wearables.
2108.07151
Yi Wang
Yi Wang, Yuchen He, Xutian Deng, Ziwei Lei, Yiting Chen, Miao Li
Learning Friction Model for Tethered Capsule Robot
ICRAE 2021 Conference paper
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
With the potential applications of capsule robots in medical endoscopy, accurate dynamic control of the capsule robot is becoming more and more important. At the scale of a capsule robot, the friction between the capsule and the environment plays an essential role in the dynamic model, which is usually difficult to model beforehand. In this paper, a tethered capsule robot system driven by a robot manipulator is built, where a strong magnetic Halbach array is mounted on the robot's end-effector to adjust the state of the capsule. To increase the control accuracy, the friction between the capsule and the environment is learned from demonstrated trajectories. With the learned friction model, experimental results demonstrate an improvement of 5.6% in terms of tracking error.
[ { "created": "Mon, 16 Aug 2021 15:23:54 GMT", "version": "v1" } ]
2021-08-17
[ [ "Wang", "Yi", "" ], [ "He", "Yuchen", "" ], [ "Deng", "Xutian", "" ], [ "Lei", "Ziwei", "" ], [ "Chen", "Yiting", "" ], [ "Li", "Miao", "" ] ]
With the potential applications of capsule robots in medical endoscopy, accurate dynamic control of the capsule robot is becoming more and more important. At the scale of a capsule robot, the friction between the capsule and the environment plays an essential role in the dynamic model, which is usually difficult to model beforehand. In this paper, a tethered capsule robot system driven by a robot manipulator is built, where a strong magnetic Halbach array is mounted on the robot's end-effector to adjust the state of the capsule. To increase the control accuracy, the friction between the capsule and the environment is learned from demonstrated trajectories. With the learned friction model, experimental results demonstrate an improvement of 5.6% in terms of tracking error.
2406.14227
Hristo Venev
Hristo Venev and Timon Gehr and Dimitar Dimitrov and Martin Vechev
Modular Synthesis of Efficient Quantum Uncomputation
25 pages, 9 figures
null
null
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
A key challenge of quantum programming is uncomputation: the reversible deallocation of qubits. And while there has been much recent progress on automating uncomputation, state-of-the-art methods are insufficient for handling today's expressive quantum programming languages. A core reason is that they operate on primitive quantum circuits, while quantum programs express computations beyond circuits, for instance, they can capture families of circuits defined recursively in terms of uncomputation and adjoints. In this paper, we introduce the first modular automatic approach to synthesize correct and efficient uncomputation for expressive quantum programs. Our method is based on two core technical contributions: (i) an intermediate representation (IR) that can capture expressive quantum programs and comes with support for uncomputation, and (ii) modular algorithms over that IR for synthesizing uncomputation and adjoints. We have built a complete end-to-end implementation of our method, including an implementation of the IR and the synthesis algorithms, as well as a translation from an expressive fragment of the Silq programming language to our IR and circuit generation from the IR. Our experimental evaluation demonstrates that we can handle programs beyond the capabilities of existing uncomputation approaches, while being competitive on the benchmarks they can handle. More broadly, we show that it is possible to benefit from the greater expressivity and safety offered by high-level quantum languages without sacrificing efficiency.
[ { "created": "Thu, 20 Jun 2024 11:47:45 GMT", "version": "v1" } ]
2024-06-21
[ [ "Venev", "Hristo", "" ], [ "Gehr", "Timon", "" ], [ "Dimitrov", "Dimitar", "" ], [ "Vechev", "Martin", "" ] ]
A key challenge of quantum programming is uncomputation: the reversible deallocation of qubits. And while there has been much recent progress on automating uncomputation, state-of-the-art methods are insufficient for handling today's expressive quantum programming languages. A core reason is that they operate on primitive quantum circuits, while quantum programs express computations beyond circuits, for instance, they can capture families of circuits defined recursively in terms of uncomputation and adjoints. In this paper, we introduce the first modular automatic approach to synthesize correct and efficient uncomputation for expressive quantum programs. Our method is based on two core technical contributions: (i) an intermediate representation (IR) that can capture expressive quantum programs and comes with support for uncomputation, and (ii) modular algorithms over that IR for synthesizing uncomputation and adjoints. We have built a complete end-to-end implementation of our method, including an implementation of the IR and the synthesis algorithms, as well as a translation from an expressive fragment of the Silq programming language to our IR and circuit generation from the IR. Our experimental evaluation demonstrates that we can handle programs beyond the capabilities of existing uncomputation approaches, while being competitive on the benchmarks they can handle. More broadly, we show that it is possible to benefit from the greater expressivity and safety offered by high-level quantum languages without sacrificing efficiency.
2305.05607
Adam Thorpe
Adam J. Thorpe
Refining Human-Centered Autonomy Using Side Information
null
HCPS 2023 Workshop on Humans in Cyber-Physical Systems (HCPS 2023), part of CPS-IoT Week
null
null
cs.HC cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data-driven algorithms for human-centered autonomy use observed data to compute models of human behavior in order to ensure safety and correctness and to avoid potential errors that arise at runtime. However, such algorithms often neglect useful a priori knowledge, known as side information, that can improve the quality of data-driven models. We identify several key challenges in human-centered autonomy, and outline possible approaches to incorporate side information in data-driven models of human behavior.
[ { "created": "Tue, 9 May 2023 16:57:19 GMT", "version": "v1" } ]
2023-05-10
[ [ "Thorpe", "Adam J.", "" ] ]
Data-driven algorithms for human-centered autonomy use observed data to compute models of human behavior in order to ensure safety and correctness and to avoid potential errors that arise at runtime. However, such algorithms often neglect useful a priori knowledge, known as side information, that can improve the quality of data-driven models. We identify several key challenges in human-centered autonomy, and outline possible approaches to incorporate side information in data-driven models of human behavior.
2206.00471
Lu Han
Lu Han, Han-Jia Ye, De-Chuan Zhan
Augmentation Component Analysis: Modeling Similarity via the Augmentation Overlaps
Accept to ICLR 2023
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised learning aims to learn an embedding space where semantically similar samples are close. Contrastive learning methods pull views of samples together and push different samples away, which utilizes semantic invariance of augmentation but ignores the relationship between samples. To better exploit the power of augmentation, we observe that semantically similar samples are more likely to have similar augmented views. Therefore, we can take the augmented views as a special description of a sample. In this paper, we model such a description as the augmentation distribution and we call it augmentation feature. The similarity in augmentation feature reflects how much the views of two samples overlap and is related to their semantical similarity. Without computational burdens to explicitly estimate values of the augmentation feature, we propose Augmentation Component Analysis (ACA) with a contrastive-like loss to learn principal components and an on-the-fly projection loss to embed data. ACA amounts to an efficient dimension reduction by PCA and extracts low-dimensional embeddings, theoretically preserving the similarity of augmentation distribution between samples. Empirical results show our method can achieve competitive results against various traditional contrastive learning methods on different benchmarks.
[ { "created": "Wed, 1 Jun 2022 13:03:58 GMT", "version": "v1" }, { "created": "Thu, 2 Feb 2023 12:40:25 GMT", "version": "v2" }, { "created": "Thu, 16 Feb 2023 15:12:39 GMT", "version": "v3" } ]
2023-02-17
[ [ "Han", "Lu", "" ], [ "Ye", "Han-Jia", "" ], [ "Zhan", "De-Chuan", "" ] ]
Self-supervised learning aims to learn an embedding space where semantically similar samples are close. Contrastive learning methods pull views of samples together and push different samples away, which utilizes semantic invariance of augmentation but ignores the relationship between samples. To better exploit the power of augmentation, we observe that semantically similar samples are more likely to have similar augmented views. Therefore, we can take the augmented views as a special description of a sample. In this paper, we model such a description as the augmentation distribution and we call it augmentation feature. The similarity in augmentation feature reflects how much the views of two samples overlap and is related to their semantical similarity. Without computational burdens to explicitly estimate values of the augmentation feature, we propose Augmentation Component Analysis (ACA) with a contrastive-like loss to learn principal components and an on-the-fly projection loss to embed data. ACA amounts to an efficient dimension reduction by PCA and extracts low-dimensional embeddings, theoretically preserving the similarity of augmentation distribution between samples. Empirical results show our method can achieve competitive results against various traditional contrastive learning methods on different benchmarks.
1904.10927
Tamara Radivilova
Lyudmyla Kirichenko and Tamara Radivilova and Illya Zinkevich
Forecasting Weakly Correlated Time Series in Tasks of Electronic Commerce
4 pages, 4 figures, 1 table
2017 12th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)
10.1109/STC-CSIT.2017.8098793
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Forecasting of weakly correlated time series of conversion rate by methods of exponential smoothing, neural network and decision tree on the example of conversion percent series for an electronic store is considered in the paper. The advantages and disadvantages of each method are considered.
[ { "created": "Tue, 16 Apr 2019 11:41:34 GMT", "version": "v1" } ]
2019-04-25
[ [ "Kirichenko", "Lyudmyla", "" ], [ "Radivilova", "Tamara", "" ], [ "Zinkevich", "Illya", "" ] ]
Forecasting of weakly correlated time series of conversion rate by methods of exponential smoothing, neural network and decision tree on the example of conversion percent series for an electronic store is considered in the paper. The advantages and disadvantages of each method are considered.
1602.03719
Luka Krapic
Aleksander Klju\v{c}ev\v{s}ek, Luka Krapi\'c
Discovering novel ingredient pairings in molecular gastronomy using network analysis
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular gastronomy is a distinct sub-discipline of food science that takes an active role in examining chemical and physical properties of ingredients and as such lends itself to more scientific approaches to finding novel ingredient pairings. With thousands of ingredients and molecules, which participate in the creation of each ingredient's flavour, it can be difficult to find compatible flavours in an efficient manner. Existing literature is focused mainly on analysis of already established cuisine based on the flavour profile of its ingredients, but fails to consider the potential in finding flavour compatibility for use in creation of completely new recipes. Expressing relationships between ingredients and their molecular structure as a bipartite network opens up this problem to effective analysis with methods from network science. We describe a series of experiments on a database of food using network analysis, which produce a set of compatible ingredients that can be used in creation of new recipes. We expect this approach and its results to dramatically simplify the creation of new recipes with previously unseen and fresh combinations of ingredients.
[ { "created": "Thu, 11 Feb 2016 13:20:53 GMT", "version": "v1" } ]
2016-02-12
[ [ "Ključevšek", "Aleksander", "" ], [ "Krapić", "Luka", "" ] ]
Molecular gastronomy is a distinct sub-discipline of food science that takes an active role in examining chemical and physical properties of ingredients and as such lends itself to more scientific approaches to finding novel ingredient pairings. With thousands of ingredients and molecules, which participate in the creation of each ingredient's flavour, it can be difficult to find compatible flavours in an efficient manner. Existing literature is focused mainly on analysis of already established cuisine based on the flavour profile of its ingredients, but fails to consider the potential in finding flavour compatibility for use in creation of completely new recipes. Expressing relationships between ingredients and their molecular structure as a bipartite network opens up this problem to effective analysis with methods from network science. We describe a series of experiments on a database of food using network analysis, which produce a set of compatible ingredients that can be used in creation of new recipes. We expect this approach and its results to dramatically simplify the creation of new recipes with previously unseen and fresh combinations of ingredients.
2205.08438
Alexander Brownlee Dr
Alexander Brownlee, Martin Pelikan, John McCall, and Andrei Petrovski
An Application of a Multivariate Estimation of Distribution Algorithm to Cancer Chemotherapy
Tech report, originally published at Missouri EDA Lab, in support of extended abstract (poster) with same title presented at GECCO 2008
null
null
null
cs.AI q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Chemotherapy treatment for cancer is a complex optimisation problem with a large number of interacting variables and constraints. A number of different probabilistic algorithms have been applied to it with varying success. In this paper we expand on this by applying two estimation of distribution algorithms to the problem. One is UMDA, which uses a univariate probabilistic model similar to previously applied EDAs. The other is hBOA, the first EDA using a multivariate probabilistic model to be applied to the chemotherapy problem. While instinct would lead us to predict that the more sophisticated algorithm would yield better performance on a complex problem like this, we show that it is outperformed by the algorithms using the simpler univariate model. We hypothesise that this is caused by the more sophisticated algorithm being impeded by the large number of interactions in the problem which are unnecessary for its solution.
[ { "created": "Tue, 17 May 2022 15:28:46 GMT", "version": "v1" } ]
2022-05-18
[ [ "Brownlee", "Alexander", "" ], [ "Pelikan", "Martin", "" ], [ "McCall", "John", "" ], [ "Petrovski", "Andrei", "" ] ]
Chemotherapy treatment for cancer is a complex optimisation problem with a large number of interacting variables and constraints. A number of different probabilistic algorithms have been applied to it with varying success. In this paper we expand on this by applying two estimation of distribution algorithms to the problem. One is UMDA, which uses a univariate probabilistic model similar to previously applied EDAs. The other is hBOA, the first EDA using a multivariate probabilistic model to be applied to the chemotherapy problem. While instinct would lead us to predict that the more sophisticated algorithm would yield better performance on a complex problem like this, we show that it is outperformed by the algorithms using the simpler univariate model. We hypothesise that this is caused by the more sophisticated algorithm being impeded by the large number of interactions in the problem which are unnecessary for its solution.
1506.06096
Dorina Thanou
Dorina Thanou, Philip A. Chou, and Pascal Frossard
Graph-based compression of dynamic 3D point cloud sequences
null
null
10.1109/TIP.2016.2529506
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames are similar, motion estimation is key to effective compression of these sequences. It however remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way.
[ { "created": "Fri, 19 Jun 2015 17:31:34 GMT", "version": "v1" } ]
2016-08-24
[ [ "Thanou", "Dorina", "" ], [ "Chou", "Philip A.", "" ], [ "Frossard", "Pascal", "" ] ]
This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames are similar, motion estimation is key to effective compression of these sequences. It however remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way.
1811.11044
Peng Wei
Peng Wei, Yue Xiao, Lilin Dan, Shichao Lv, and Wei Xiang
Performance Analysis of Low-Interference N-Continuous OFDM
15 pages, 14 figures
null
10.23919/JCC.2022.11.012
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The low-interference N-continuous orthogonal frequency division multiplexing (NC-OFDM) system [25], [26] is investigated in terms of power spectrum density (PSD) and bit error rate (BER), to prove and quantify its advantages over traditional NC-OFDM. The PSD and BER performances of the low-interference scheme are analyzed and compared under the parameters of the highest derivative order (HDO) and the length of the smooth signal. In the context of PSD, different from one discontinuous point per NC-OFDM symbol in [25], the sidelobe suppression performance is evaluated upon considering two discontinuous points due to the finite continuity of the smooth signal and its higher-order derivatives. It was shown that with an increased HDO and an increased length of the smooth signal, a more rapid sidelobe decaying is achieved, for the significant continuity improvement of the OFDM signal and its higher-order derivatives. However, our PSD analysis also shows that if the length of the smooth signal is set inappropriately, the performance may be degraded, even if the HDO is large. Furthermore, it was shown in the error performance analysis that under the assumptions of perfect and imperfect synchronization, the low-interference scheme incurs small BER performance degradation for a short length of the smooth signal or a small HDO as opposed to conventional NC-OFDM. Based on analysis and simulation results, the trade-offs between sidelobe suppression and BER are studied with the above two parameters.
[ { "created": "Tue, 27 Nov 2018 15:08:28 GMT", "version": "v1" }, { "created": "Wed, 5 Dec 2018 17:04:27 GMT", "version": "v2" }, { "created": "Tue, 3 Nov 2020 08:09:03 GMT", "version": "v3" } ]
2023-03-30
[ [ "Wei", "Peng", "" ], [ "Xiao", "Yue", "" ], [ "Dan", "Lilin", "" ], [ "Lv", "Shichao", "" ], [ "Xiang", "Wei", "" ] ]
The low-interference N-continuous orthogonal frequency division multiplexing (NC-OFDM) system [25], [26] is investigated in terms of power spectrum density (PSD) and bit error rate (BER), to prove and quantify its advantages over traditional NC-OFDM. The PSD and BER performances of the low-interference scheme are analyzed and compared under the parameters of the highest derivative order (HDO) and the length of the smooth signal. In the context of PSD, different from one discontinuous point per NC-OFDM symbol in [25], the sidelobe suppression performance is evaluated upon considering two discontinuous points due to the finite continuity of the smooth signal and its higher-order derivatives. It was shown that with an increased HDO and an increased length of the smooth signal, a more rapid sidelobe decaying is achieved, for the significant continuity improvement of the OFDM signal and its higher-order derivatives. However, our PSD analysis also shows that if the length of the smooth signal is set inappropriately, the performance may be degraded, even if the HDO is large. Furthermore, it was shown in the error performance analysis that under the assumptions of perfect and imperfect synchronization, the low-interference scheme incurs small BER performance degradation for a short length of the smooth signal or a small HDO as opposed to conventional NC-OFDM. Based on analysis and simulation results, the trade-offs between sidelobe suppression and BER are studied with the above two parameters.
1407.1065
Mahdi Soltanolkotabi
Emmanuel Candes, Xiaodong Li, Mahdi Soltanolkotabi
Phase Retrieval via Wirtinger Flow: Theory and Algorithms
IEEE Transactions on Information Theory, Vol. 64 (4), Feb. 2015
null
10.1109/TIT.2015.2399924
null
cs.IT math.FA math.IT math.NA math.OC math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal x in C^n about which we have phaseless samples of the form y_r = |< a_r,x >|^2, r = 1,2,...,m (knowledge of the phase of these samples would yield a linear system). This paper develops a non-convex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of non-convex optimization schemes that may have implications for computational problems beyond phase retrieval.
[ { "created": "Thu, 3 Jul 2014 21:14:47 GMT", "version": "v1" }, { "created": "Tue, 3 Feb 2015 08:31:04 GMT", "version": "v2" }, { "created": "Tue, 24 Nov 2015 07:03:41 GMT", "version": "v3" } ]
2016-11-17
[ [ "Candes", "Emmanuel", "" ], [ "Li", "Xiaodong", "" ], [ "Soltanolkotabi", "Mahdi", "" ] ]
We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal x in C^n about which we have phaseless samples of the form y_r = |< a_r,x >|^2, r = 1,2,...,m (knowledge of the phase of these samples would yield a linear system). This paper develops a non-convex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of non-convex optimization schemes that may have implications for computational problems beyond phase retrieval.
1905.04000
Takanori Fujiwara
Takanori Fujiwara, Jia-Kai Chou, Shilpika, Panpan Xu, Liu Ren, Kwan-Liu Ma
An Incremental Dimensionality Reduction Method for Visualizing Streaming Multidimensional Data
This is the author's version of the article that has been published in IEEE Transactions on Visualization and Computer Graphics. The final version of this record is available at: 10.1109/TVCG.2019.2934433
null
10.1109/TVCG.2019.2934433
null
cs.GR cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dimensionality reduction (DR) methods are commonly used for analyzing and visualizing multidimensional data. However, when data is a live streaming feed, conventional DR methods cannot be directly used because of their computational complexity and inability to preserve the projected data positions at previous time points. In addition, the problem becomes even more challenging when the dynamic data records have a varying number of dimensions as often found in real-world applications. This paper presents an incremental DR solution. We enhance an existing incremental PCA method in several ways to ensure its usability for visualizing streaming multidimensional data. First, we use geometric transformation and animation methods to help preserve a viewer's mental map when visualizing the incremental results. Second, to handle data dimension variants, we use an optimization method to estimate the projected data positions, and also convey the resulting uncertainty in the visualization. We demonstrate the effectiveness of our design with two case studies using real-world datasets.
[ { "created": "Fri, 10 May 2019 08:15:42 GMT", "version": "v1" }, { "created": "Wed, 31 Jul 2019 05:38:20 GMT", "version": "v2" }, { "created": "Tue, 15 Oct 2019 04:16:00 GMT", "version": "v3" } ]
2019-10-16
[ [ "Fujiwara", "Takanori", "" ], [ "Chou", "Jia-Kai", "" ], [ "Shilpika", "", "" ], [ "Xu", "Panpan", "" ], [ "Ren", "Liu", "" ], [ "Ma", "Kwan-Liu", "" ] ]
Dimensionality reduction (DR) methods are commonly used for analyzing and visualizing multidimensional data. However, when data is a live streaming feed, conventional DR methods cannot be directly used because of their computational complexity and inability to preserve the projected data positions at previous time points. In addition, the problem becomes even more challenging when the dynamic data records have a varying number of dimensions as often found in real-world applications. This paper presents an incremental DR solution. We enhance an existing incremental PCA method in several ways to ensure its usability for visualizing streaming multidimensional data. First, we use geometric transformation and animation methods to help preserve a viewer's mental map when visualizing the incremental results. Second, to handle data dimension variants, we use an optimization method to estimate the projected data positions, and also convey the resulting uncertainty in the visualization. We demonstrate the effectiveness of our design with two case studies using real-world datasets.
2203.00972
Jacek Komorowski
Jacek Komorowski
Improving Point Cloud Based Place Recognition with Ranking-based Loss and Large Batch Training
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The paper presents a simple and effective learning-based method for computing a discriminative 3D point cloud descriptor for place recognition purposes. Recent state-of-the-art methods have relatively complex architectures such as a multi-scale pyramid of point Transformers combined with a pyramid of feature aggregation modules. Our method uses a simple and efficient 3D convolutional feature extraction, based on a sparse voxelized representation, enhanced with channel attention blocks. We employ recent advances in image retrieval and propose a modified version of a loss function based on a differentiable average precision approximation. Such loss function requires training with very large batches for the best results. This is enabled by using multistaged backpropagation. Experimental evaluation on the popular benchmarks proves the effectiveness of our approach, with a consistent improvement over the state of the art.
[ { "created": "Wed, 2 Mar 2022 09:29:28 GMT", "version": "v1" }, { "created": "Thu, 7 Apr 2022 22:02:24 GMT", "version": "v2" } ]
2022-04-11
[ [ "Komorowski", "Jacek", "" ] ]
The paper presents a simple and effective learning-based method for computing a discriminative 3D point cloud descriptor for place recognition purposes. Recent state-of-the-art methods have relatively complex architectures such as a multi-scale pyramid of point Transformers combined with a pyramid of feature aggregation modules. Our method uses a simple and efficient 3D convolutional feature extraction, based on a sparse voxelized representation, enhanced with channel attention blocks. We employ recent advances in image retrieval and propose a modified version of a loss function based on a differentiable average precision approximation. Such loss function requires training with very large batches for the best results. This is enabled by using multistaged backpropagation. Experimental evaluation on the popular benchmarks proves the effectiveness of our approach, with a consistent improvement over the state of the art.
2208.09174
Travis Greene
Travis Greene, Amit Dhurandhar, Galit Shmueli
Atomist or Holist? A Diagnosis and Vision for More Productive Interdisciplinary AI Ethics Dialogue
9 pages, 1 figure, 2 tables. To be published in Patterns by Cell Press
null
null
null
cs.CY cs.AI stat.OT
http://creativecommons.org/licenses/by/4.0/
In response to growing recognition of the social impact of new AI-based technologies, major AI and ML conferences and journals now encourage or require papers to include ethics impact statements and undergo ethics reviews. This move has sparked heated debate concerning the role of ethics in AI research, at times devolving into name-calling and threats of "cancellation." We diagnose this conflict as one between atomist and holist ideologies. Among other things, atomists believe facts are and should be kept separate from values, while holists believe facts and values are and should be inextricable from one another. With the goal of reducing disciplinary polarization, we draw on numerous philosophical and historical sources to describe each ideology's core beliefs and assumptions. Finally, we call on atomists and holists within the ever-expanding data science community to exhibit greater empathy during ethical disagreements and propose four targeted strategies to ensure AI research benefits society.
[ { "created": "Fri, 19 Aug 2022 06:51:27 GMT", "version": "v1" }, { "created": "Thu, 1 Sep 2022 04:38:42 GMT", "version": "v2" }, { "created": "Sat, 12 Nov 2022 05:27:28 GMT", "version": "v3" } ]
2022-11-15
[ [ "Greene", "Travis", "" ], [ "Dhurandhar", "Amit", "" ], [ "Shmueli", "Galit", "" ] ]
In response to growing recognition of the social impact of new AI-based technologies, major AI and ML conferences and journals now encourage or require papers to include ethics impact statements and undergo ethics reviews. This move has sparked heated debate concerning the role of ethics in AI research, at times devolving into name-calling and threats of "cancellation." We diagnose this conflict as one between atomist and holist ideologies. Among other things, atomists believe facts are and should be kept separate from values, while holists believe facts and values are and should be inextricable from one another. With the goal of reducing disciplinary polarization, we draw on numerous philosophical and historical sources to describe each ideology's core beliefs and assumptions. Finally, we call on atomists and holists within the ever-expanding data science community to exhibit greater empathy during ethical disagreements and propose four targeted strategies to ensure AI research benefits society.
2202.13252
Richard Sutton
Richard S. Sutton
The Quest for a Common Model of the Intelligent Decision Maker
Will appear as an extended abstract at the fifth Multi-disciplinary Conference on Reinforcement Learning and Decision Making, held in Providence, Rhode Island, June 8-11, 2022
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The premise of the Multi-disciplinary Conference on Reinforcement Learning and Decision Making is that multiple disciplines share an interest in goal-directed decision making over time. The idea of this paper is to sharpen and deepen this premise by proposing a perspective on the decision maker that is substantive and widely held across psychology, artificial intelligence, economics, control theory, and neuroscience, which I call the "common model of the intelligent agent". The common model does not include anything specific to any organism, world, or application domain. The common model does include aspects of the decision maker's interaction with its world (there must be input and output, and a goal) and internal components of the decision maker (for perception, decision-making, internal evaluation, and a world model). I identify these aspects and components, note that they are given different names in different disciplines but refer essentially to the same ideas, and discuss the challenges and benefits of devising a neutral terminology that can be used across disciplines. It is time to recognize and build on the convergence of multiple diverse disciplines on a substantive common model of the intelligent agent.
[ { "created": "Sat, 26 Feb 2022 23:40:42 GMT", "version": "v1" }, { "created": "Fri, 8 Apr 2022 01:09:12 GMT", "version": "v2" }, { "created": "Sun, 5 Jun 2022 22:15:16 GMT", "version": "v3" } ]
2022-06-07
[ [ "Sutton", "Richard S.", "" ] ]
The premise of the Multi-disciplinary Conference on Reinforcement Learning and Decision Making is that multiple disciplines share an interest in goal-directed decision making over time. The idea of this paper is to sharpen and deepen this premise by proposing a perspective on the decision maker that is substantive and widely held across psychology, artificial intelligence, economics, control theory, and neuroscience, which I call the "common model of the intelligent agent". The common model does not include anything specific to any organism, world, or application domain. The common model does include aspects of the decision maker's interaction with its world (there must be input and output, and a goal) and internal components of the decision maker (for perception, decision-making, internal evaluation, and a world model). I identify these aspects and components, note that they are given different names in different disciplines but refer essentially to the same ideas, and discuss the challenges and benefits of devising a neutral terminology that can be used across disciplines. It is time to recognize and build on the convergence of multiple diverse disciplines on a substantive common model of the intelligent agent.