Dataset schema (field name, type, minimum, maximum; "k" = thousands of characters):

field            type            min        max
id               stringlengths   9          10
submitter        stringlengths   1          64
authors          stringlengths   4          20.7k
title            stringlengths   4          246
comments         stringlengths   1          523
journal-ref      stringlengths   4          404
doi              stringlengths   11         153
report-no        stringlengths   2          254
categories       stringlengths   5          98
license          stringclasses   9 values
orig_abstract    stringlengths   14         3.35k
versions         listlengths     1          60
update_date      stringlengths   10         10
authors_parsed   listlengths     1          1.35k
abstract         stringlengths   11         3.34k
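Read as a schema: `stringlengths` fields report the minimum and maximum character length observed across rows, `listlengths` fields report element counts, and `stringclasses` fields take one of a fixed set of values. A minimal sketch of checking one record against these bounds — plain Python dicts, with the record values abbreviated from the first row below and the bounds copied from the table above:

```python
# One record of the dataset, abbreviated from the first row shown below.
record = {
    "id": "1604.00693",
    "submitter": "Edmond Awad",
    "title": "Pareto Optimality and Strategy Proofness in Group Argument Evaluation (Extended Version)",
    "doi": "10.1093/logcom/exx017",
    "categories": "cs.AI",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "versions": [
        {"created": "Sun, 3 Apr 2016 21:48:37 GMT", "version": "v1"},
        {"created": "Fri, 7 Apr 2017 20:02:55 GMT", "version": "v2"},
    ],
    "update_date": "2017-06-20",
    "authors_parsed": [["Awad", "Edmond", ""], ["Caminada", "Martin", ""]],
}

# String fields carry character-length bounds; list fields carry element-count bounds.
assert 9 <= len(record["id"]) <= 10          # stringlengths 9..10
assert 1 <= len(record["versions"]) <= 60    # listlengths 1..60
assert 10 == len(record["update_date"])      # stringlengths 10..10

# `categories` is a space-separated list of arXiv category codes.
cats = record["categories"].split()
print(cats)  # ['cs.AI']
```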

id:             1604.00693
submitter:      Edmond Awad
authors:        Edmond Awad, Martin Caminada, Gabriella Pigozzi, Mikołaj Podlaszewski and Iyad Rahwan
title:          Pareto Optimality and Strategy Proofness in Group Argument Evaluation (Extended Version)
comments:       null
journal-ref:    null
doi:            10.1093/logcom/exx017
report-no:      null
categories:     cs.AI
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  An inconsistent knowledge base can be abstracted as a set of arguments and a defeat relation among them. There can be more than one consistent way to evaluate such an argumentation graph. Collective argument evaluation is the problem of aggregating the opinions of multiple agents on how a given set of arguments should be evaluated. It is crucial to ensure not only that the outcome is logically consistent, but also that it satisfies measures of social optimality and immunity to strategic manipulation. This is because agents have their individual preferences about what the outcome ought to be. In the current paper, we analyze three previously introduced argument-based aggregation operators with respect to Pareto optimality and strategy proofness under different general classes of agent preferences. We highlight fundamental trade-offs between strategic manipulability and social optimality on one hand, and classical logical criteria on the other. Our results motivate further investigation into the relationship between social choice and argumentation theory. The results are also relevant for choosing an appropriate aggregation operator given the criteria that are considered more important, as well as the nature of agents' preferences.
versions:       [ { "created": "Sun, 3 Apr 2016 21:48:37 GMT", "version": "v1" }, { "created": "Fri, 7 Apr 2017 20:02:55 GMT", "version": "v2" } ]
update_date:    2017-06-20
authors_parsed: [ [ "Awad", "Edmond", "" ], [ "Caminada", "Martin", "" ], [ "Pigozzi", "Gabriella", "" ], [ "Podlaszewski", "Mikołaj", "" ], [ "Rahwan", "Iyad", "" ] ]
abstract:       (identical to orig_abstract)

id:             1901.04518
submitter:      Markus Fröhle
authors:        Markus Fröhle, Karl Granström, Henk Wymeersch
title:          Decentralized Poisson Multi-Bernoulli Filtering for Vehicle Tracking
comments:       14 pages, 5 figures
journal-ref:    null
doi:            null
report-no:      null
categories:     cs.MA cs.SY
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  A decentralized Poisson multi-Bernoulli filter is proposed to track multiple vehicles using multiple high-resolution sensors. Independent filters estimate the vehicles' presence, state, and shape using a Gaussian process extent model; a decentralized filter is realized through fusion of the filters' posterior densities. An efficient implementation is achieved by parametric state representation, utilization of single-hypothesis tracks, and fusion of vehicle information based on a fusion mapping. Numerical results demonstrate the performance of the proposed filter.
versions:       [ { "created": "Mon, 14 Jan 2019 19:03:41 GMT", "version": "v1" }, { "created": "Thu, 5 Mar 2020 13:58:10 GMT", "version": "v2" } ]
update_date:    2020-03-06
authors_parsed: [ [ "Fröhle", "Markus", "" ], [ "Granström", "Karl", "" ], [ "Wymeersch", "Henk", "" ] ]
abstract:       (identical to orig_abstract)

id:             1704.05952
submitter:      Raydonal Ospina
authors:        Luis Gomez, Raydonal Ospina and Alejandro C. Frery
title:          Unassisted Quantitative Evaluation Of Despeckling Filters
comments:       Accepted for publication in Remote Sensing - Open Access Journal
journal-ref:    null
doi:            null
report-no:      null
categories:     cs.CV
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  SAR (Synthetic Aperture Radar) imaging plays a central role in Remote Sensing due to, among other important features, its ability to provide high-resolution, day-and-night and almost weather-independent images. SAR images are affected by a granular contamination, speckle, that can be described by a multiplicative model. Many despeckling techniques have been proposed in the literature, as well as measures of the quality of the results they provide. Assuming the multiplicative model, the observed image $Z$ is the product of two independent fields: the backscatter $X$ and the speckle $Y$. The result of any speckle filter is $\widehat X$, an estimator of the backscatter $X$, based solely on the observed data $Z$. An ideal estimator would be one for which the ratio of the observed image to the filtered one, $I=Z/\widehat X$, is only speckle: a collection of independent identically distributed samples from Gamma variates. We then assess the quality of a filter by how closely $I$ adheres to the statistical properties of pure speckle. We analyze filters through the ratio image they produce with regard to first- and second-order statistics: the former checks marginal properties, while the latter verifies lack of structure. A new quantitative image-quality index is then defined and applied to state-of-the-art despeckling filters. This new measure provides results consistent with commonly used quality measures (equivalent number of looks, PSNR, MSSIM, $\beta$ edge correlation, and preservation of the mean), and ranks the filters' results in agreement with their visual analysis. We conclude our study by showing that the proposed measure can be successfully used to optimize the (often many) parameters that define a speckle filter.
versions:       [ { "created": "Wed, 19 Apr 2017 23:01:30 GMT", "version": "v1" } ]
update_date:    2017-04-21
authors_parsed: [ [ "Gomez", "Luis", "" ], [ "Ospina", "Raydonal", "" ], [ "Frery", "Alejandro C.", "" ] ]
abstract:       (identical to orig_abstract)

id:             1910.02059
submitter:      Georgios Birmpas
authors:        Georgios Birmpas, Elias Koutsoupias, Philip Lazos, Francisco J. Marmolejo-Cossío
title:          Fairness and Efficiency in DAG-based Cryptocurrencies
comments:       null
journal-ref:    null
doi:            null
report-no:      null
categories:     cs.CR cs.GT cs.MA
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  Bitcoin is a decentralised digital currency that serves as an alternative to existing transaction systems based on an external central authority for security. Although Bitcoin has many desirable properties, one of its fundamental shortcomings is its inability to process transactions at high rates. To address this challenge, many subsequent protocols either modify the rules of block acceptance (longest chain rule) and reward, or alter the graphical structure of the public ledger from a tree to a directed acyclic graph (DAG). Motivated by these approaches, we introduce a new general framework that captures ledger growth for a large class of DAG-based implementations. With this in hand, and by assuming honest miner behaviour, we (experimentally) explore how different DAG-based protocols perform in terms of fairness, i.e., whether the block reward of a miner is proportional to their hash power, as well as efficiency, i.e., what proportion of user transactions a ledger deems valid after a certain length of time. Our results demonstrate fundamental structural limits on how well DAG-based ledger protocols cope with a high transaction load. More specifically, we show that even in a scenario where every miner on the system is honest in terms of when they publish blocks, what they point to, and what transactions each block contains, fairness and efficiency of the ledger can break down at specific hash rates if miners have differing levels of connectivity to the P2P network sustaining the protocol.
versions:       [ { "created": "Fri, 4 Oct 2019 17:35:46 GMT", "version": "v1" } ]
update_date:    2019-10-07
authors_parsed: [ [ "Birmpas", "Georgios", "" ], [ "Koutsoupias", "Elias", "" ], [ "Lazos", "Philip", "" ], [ "Marmolejo-Cossío", "Francisco J.", "" ] ]
abstract:       (identical to orig_abstract)

id:             2108.13591
submitter:      Jingfei Chang
authors:        Jingfei Chang, Yang Lu, Ping Xue, Yiqun Xu and Zhen Wei
title:          AIP: Adversarial Iterative Pruning Based on Knowledge Transfer for Convolutional Neural Networks
comments:       15 pages, 7 figures
journal-ref:    null
doi:            null
report-no:      null
categories:     cs.CV
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  With the increase of structural complexity, convolutional neural networks (CNNs) incur a considerable computational cost. Meanwhile, existing research reveals salient parameter redundancy in CNNs. Current pruning methods can compress CNNs with little performance drop, but as the pruning ratio increases, the accuracy loss becomes more severe. Moreover, some iterative pruning methods struggle to accurately identify and delete unimportant parameters due to the accuracy drop during pruning. We propose a novel adversarial iterative pruning method (AIP) for CNNs based on knowledge transfer. The original network is regarded as the teacher while the compressed network is the student. We apply attention maps and output features to transfer information from the teacher to the student. Then, a shallow fully-connected network is designed as the discriminator to let the outputs of the two networks play an adversarial game, so that the pruned accuracy can be quickly recovered between pruning intervals. Finally, an iterative pruning scheme based on the importance of channels is proposed. We conduct extensive experiments on the image classification tasks CIFAR-10, CIFAR-100, and ILSVRC-2012 to verify that our pruning method can achieve efficient compression for CNNs even without accuracy loss. On ILSVRC-2012, when removing 36.78% of the parameters and 45.55% of the floating-point operations (FLOPs) of ResNet-18, the Top-1 accuracy drop is only 0.66%. Our method is superior to some state-of-the-art pruning schemes in terms of compression rate and accuracy. Moreover, we further demonstrate that AIP generalizes well to the object detection task PASCAL VOC.
versions:       [ { "created": "Tue, 31 Aug 2021 02:38:36 GMT", "version": "v1" } ]
update_date:    2021-09-01
authors_parsed: [ [ "Chang", "Jingfei", "" ], [ "Lu", "Yang", "" ], [ "Xue", "Ping", "" ], [ "Xu", "Yiqun", "" ], [ "Wei", "Zhen", "" ] ]
abstract:       (identical to orig_abstract)

id:             1510.06788
submitter:      Ravi Chugh
authors:        Ravi Chugh
title:          Prodirect Manipulation: Bidirectional Programming for the Masses
comments:       ICSE 2016 Companion Proceedings (Visions of 2025 Track), May 14-22, 2016, Austin, TX, USA
journal-ref:    null
doi:            10.1145/2889160.2889210
report-no:      null
categories:     cs.SE
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  Software interfaces today generally fall at either end of a spectrum. On one end are programmable systems, which allow expert users (i.e. programmers) to write software artifacts that describe complex abstractions, but programs are disconnected from their eventual output. On the other end are domain-specific graphical user interfaces (GUIs), which allow end users (i.e. non-programmers) to easily create varied content but present insurmountable walls when a desired feature is not built-in. Both programmatic and direct manipulation have distinct strengths, but users must typically choose one over the other or use some ad-hoc combination of systems. Our goal, put simply, is to bridge this divide. We envision novel software systems that tightly couple programmatic and direct manipulation --- a combination we dub prodirect manipulation --- for a variety of use cases. This will require advances in a broad range of software engineering disciplines, from program analysis and program synthesis technology to user interface design and evaluation. In this extended abstract, we propose two general strategies --- real-time program synthesis and domain-specific synthesis of general-purpose programs --- that may prove fruitful for overcoming the technical challenges. We also discuss metrics that will be important in evaluating the usability and utility of prodirect manipulation systems.
versions:       [ { "created": "Thu, 22 Oct 2015 23:44:36 GMT", "version": "v1" }, { "created": "Wed, 24 Feb 2016 15:28:44 GMT", "version": "v2" } ]
update_date:    2016-02-25
authors_parsed: [ [ "Chugh", "Ravi", "" ] ]
abstract:       (identical to orig_abstract)

id:             1509.03807
submitter:      Nguyen H. Nam
authors:        Le Xuan Quang, Le Huy Hoang, Vu Dinh Chuan, Nguyen Hoai Nam, Nguyen Thi Tu Anh and Vu Thi Hong Nhung
title:          Integrated Science, Technology, Engineering and Mathematics (STEM) Education through Active Experience of Designing Technical Toys in Vietnamese Schools
comments:       12 pages, 7 figures, 2 tables, British Journal of Education, Society & Behavioural Science, 2015
journal-ref:    null
doi:            10.9734/BJESBS/2015/19429
report-no:      null
categories:     cs.CY
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  STEM education has attracted great attention. The purposes of this research are to: (1) study STEM education, (2) explore STEM education through creative and experiential activity, and (3) suggest applying STEM education by designing technical toys for middle school students. This study used a qualitative approach to carry out teaching integration for STEM education, applied to teaching technology in Vietnamese middle schools. The design was performed at the Faculty of Technology Education, Hanoi National University of Education, Vietnam, in April 2015. The study used an integrated approach to design subjects for STEM education. Two procedures for integration were undertaken with analysis. A sample of producing a technical toy was consistent with developing students' competencies. An integrated approach to STEM education through designing technical toys is possible. Recently, there has been booming interest in Integrated Science, Technology, Engineering and Mathematics (STEM) education, but the approaches to STEM remain controversial in diverse educational contexts. This study addressed this issue by exploring STEM education with the use of creative and experiential activities in a Vietnamese educational context. It also proposed a practical model for integrating STEM into teaching technology in secondary schools by designing technical toys. The implementation of the practical model suggests the possibility of using an integrated approach to STEM education through designing technical toys for middle school students in Vietnam. By applying subject knowledge domains to solve real-world problems, students can experience the benefits of concrete and active learning in a meaningful and practical context. The multidisciplinary and interdisciplinary integration approaches are consistent with the development of the students' competencies.
versions:       [ { "created": "Sun, 13 Sep 2015 04:22:03 GMT", "version": "v1" } ]
update_date:    2015-09-15
authors_parsed: [ [ "Quang", "Le Xuan", "" ], [ "Hoang", "Le Huy", "" ], [ "Chuan", "Vu Dinh", "" ], [ "Nam", "Nguyen Hoai", "" ], [ "Anh", "Nguyen Thi Tu", "" ], [ "Nhung", "Vu Thi Hong", "" ] ]
abstract:       (identical to orig_abstract)

id:             1902.11184
submitter:      Samia El Haddouti
authors:        Samia El Haddouti and Mohamed Dafir Ech-Cherif El Kettani
title:          Towards an Interoperable Identity Management Framework: a Comparative Study
comments:       null
journal-ref:    null
doi:            null
report-no:      null
categories:     cs.CR
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  The development of services and the growing demand for resource sharing among users from different organizations with some level of affinity have motivated the creation of Identity Management Systems. Identity Management has gained significant attention in recent years in the form of several projects producing many standards, prototypes and application models in both academia and industry. However, interoperability between different Identity Management solutions is still a complex challenge yet to be achieved. The user can only use one Identity Provider within a single Service Provider session, when in many scenarios the user needs to provide attributes from multiple Identity Providers. This paper presents the state of the art of our research and focuses on two main topics: first, to provide a detailed study of Identity Management and the integrated disciplines and technologies in general; second, to summarize the main approaches that have been proposed to overcome the interoperability challenge.
versions:       [ { "created": "Thu, 28 Feb 2019 16:06:14 GMT", "version": "v1" } ]
update_date:    2019-03-01
authors_parsed: [ [ "Haddouti", "Samia El", "" ], [ "Kettani", "Mohamed Dafir Ech-Cherif El", "" ] ]
abstract:       (identical to orig_abstract)

id:             2302.12190
submitter:      Ciprian-Octavian Truică
authors:        Ciprian-Octavian Truică and Elena-Simona Apostol and Radu-Cătălin Nicolescu and Panagiotis Karras
title:          MCWDST: a Minimum-Cost Weighted Directed Spanning Tree Algorithm for Real-Time Fake News Mitigation in Social Media
comments:       null
journal-ref:    IEEE Access, 11:125861-125873, 2023
doi:            10.1109/ACCESS.2023.3331220
report-no:      null
categories:     cs.SI cs.AI cs.CL cs.NE
license:        http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract:  The widespread availability of internet access and handheld devices confers to social media a power similar to the one newspapers used to have. People seek affordable information on social media and can reach it within seconds. Yet this convenience comes with dangers; any user may freely post whatever they please and the content can stay online for a long period, regardless of its truthfulness. A need to detect untruthful information, also known as fake news, arises. In this paper, we present an end-to-end solution that accurately detects fake news and immunizes network nodes that spread it in real time. To detect fake news, we propose two new stacked deep learning architectures that utilize convolutional and bidirectional LSTM layers. To mitigate the spread of fake news, we propose a real-time network-aware strategy that (1) constructs a minimum-cost weighted directed spanning tree for a detected node, and (2) immunizes nodes in that tree by scoring their harmfulness using a novel ranking function. We demonstrate the effectiveness of our solution on five real-world datasets.
versions:       [ { "created": "Thu, 23 Feb 2023 17:31:40 GMT", "version": "v1" }, { "created": "Fri, 19 Jan 2024 16:30:14 GMT", "version": "v2" } ]
update_date:    2024-01-22
authors_parsed: [ [ "Truică", "Ciprian-Octavian", "" ], [ "Apostol", "Elena-Simona", "" ], [ "Nicolescu", "Radu-Cătălin", "" ], [ "Karras", "Panagiotis", "" ] ]
abstract:       (identical to orig_abstract)

id:             1503.02997
submitter:      No'am Newman
authors:        No'am Newman
title:          Spreadsheets in an ERP environment: not what the doctor ordered
comments:       In Proceedings of the 2nd Workshop on Software Engineering Methods in Spreadsheets (http://spreadsheetlab.org/sems15/)
journal-ref:    null
doi:            null
report-no:      null
categories:     cs.SE
license:        http://creativecommons.org/licenses/by/3.0/
orig_abstract:  Modern ERP systems contain flexible report generators but the tendency exists for users to export data to spreadsheets for manipulation, reporting and decision making. A purported reason for this is that some users are more familiar with personal reporting tools (spreadsheets) as opposed to enterprise reporting tools. The author's doctoral research intends to measure the extent of spreadsheet usage in ERP environments and to determine which factors facilitate this.
versions:       [ { "created": "Tue, 10 Mar 2015 17:31:47 GMT", "version": "v1" } ]
update_date:    2015-03-11
authors_parsed: [ [ "Newman", "No'am", "" ] ]
abstract:       (identical to orig_abstract)

id:             2406.04995
submitter:      Julian Minder
authors:        Julian Minder, Laurence Brandenberger, Luis Salamanca, Frank Schweitzer
title:          Data2Neo - A Tool for Complex Neo4j Data Integration
comments:       null
journal-ref:    null
doi:            null
report-no:      null
categories:     cs.DB
license:        http://creativecommons.org/licenses/by/4.0/
orig_abstract:  This paper introduces Data2Neo, an open-source Python library for converting relational data into knowledge graphs stored in Neo4j databases. With extensive customization options and support for continuous online data integration from various data sources, Data2Neo is designed to be user-friendly, efficient, and scalable to large datasets. The tool significantly lowers the barrier to entry for creating and using knowledge graphs, making this increasingly popular form of data representation accessible to a wider audience. The code is available at https://github.com/jkminder/data2neo .
versions:       [ { "created": "Fri, 7 Jun 2024 15:06:36 GMT", "version": "v1" }, { "created": "Mon, 10 Jun 2024 08:28:04 GMT", "version": "v2" }, { "created": "Tue, 11 Jun 2024 11:12:49 GMT", "version": "v3" }, { "created": "Wed, 12 Jun 2024 12:16:11 GMT", "version": "v4" } ]
update_date:    2024-06-13
authors_parsed: [ [ "Minder", "Julian", "" ], [ "Brandenberger", "Laurence", "" ], [ "Salamanca", "Luis", "" ], [ "Schweitzer", "Frank", "" ] ]
abstract:       (identical to orig_abstract)
2008.07720
Hung Pham Thuc
Pham Thuc Hung, Kenji Yamanishi
Word2vec Skip-gram Dimensionality Selection via Sequential Normalized Maximum Likelihood
null
null
null
null
cs.LG cs.CL stat.ML
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a novel information criteria-based approach to select the dimensionality of the word2vec Skip-gram (SG). From the perspective of the probability theory, SG is considered as an implicit probability distribution estimation under the assumption that there exists a true contextual distribution among words. Therefore, we apply information criteria with the aim of selecting the best dimensionality so that the corresponding model can be as close as possible to the true distribution. We examine the following information criteria for the dimensionality selection problem: the Akaike Information Criterion, Bayesian Information Criterion, and Sequential Normalized Maximum Likelihood (SNML) criterion. SNML is the total codelength required for the sequential encoding of a data sequence on the basis of the minimum description length. The proposed approach is applied to both the original SG model and the SG Negative Sampling model to clarify the idea of using information criteria. Additionally, as the original SNML suffers from computational disadvantages, we introduce novel heuristics for its efficient computation. Moreover, we empirically demonstrate that SNML outperforms both BIC and AIC. In comparison with other evaluation methods for word embedding, the dimensionality selected by SNML is significantly closer to the optimal dimensionality obtained by word analogy or word similarity tasks.
[ { "created": "Tue, 18 Aug 2020 03:24:21 GMT", "version": "v1" }, { "created": "Mon, 24 Aug 2020 04:55:56 GMT", "version": "v2" }, { "created": "Tue, 25 Aug 2020 01:08:24 GMT", "version": "v3" } ]
2020-08-26
[ [ "Hung", "Pham Thuc", "" ], [ "Yamanishi", "Kenji", "" ] ]
In this paper, we propose a novel information criteria-based approach to select the dimensionality of the word2vec Skip-gram (SG). From the perspective of probability theory, SG is considered an implicit probability distribution estimator under the assumption that there exists a true contextual distribution among words. Therefore, we apply information criteria with the aim of selecting the dimensionality for which the corresponding model is as close as possible to the true distribution. We examine the following information criteria for the dimensionality selection problem: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Sequential Normalized Maximum Likelihood (SNML) criterion. SNML is the total codelength required for the sequential encoding of a data sequence on the basis of the minimum description length. The proposed approach is applied to both the original SG model and the SG Negative Sampling model to clarify the idea of using information criteria. Additionally, as the original SNML suffers from computational disadvantages, we introduce novel heuristics for its efficient computation. Moreover, we empirically demonstrate that SNML outperforms both BIC and AIC. In comparison with other evaluation methods for word embeddings, the dimensionality selected by SNML is significantly closer to the optimal dimensionality obtained by word analogy or word similarity tasks.
2206.10071
Kay Liu
Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu
BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs
NeurIPS 2022. Benchmark available at https://github.com/pygod-team/pygod/tree/main/benchmark
null
null
null
cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Detecting which nodes in graphs are outliers is a relatively new machine learning task with numerous applications. Despite the proliferation of algorithms developed in recent years for this task, there has been no standard comprehensive setting for performance evaluation. Consequently, it has been difficult to understand which methods work well and when under a broad range of settings. To bridge this gap, we present--to the best of our knowledge--the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs called BOND, with the following highlights. (1) We benchmark the outlier detection performance of 14 methods ranging from classical matrix factorization to the latest graph neural networks. (2) Using nine real datasets, our benchmark assesses how the different detection methods respond to two major types of synthetic outliers and separately to "organic" (real non-synthetic) outliers. (3) Using an existing random graph generation technique, we produce a family of synthetically generated datasets of different graph sizes that enable us to compare the running time and memory usage of the different outlier detection algorithms. Based on our experimental results, we discuss the pros and cons of existing graph outlier detection algorithms, and we highlight opportunities for future research. Importantly, our code is freely available and meant to be easily extendable: https://github.com/pygod-team/pygod/tree/main/benchmark
[ { "created": "Tue, 21 Jun 2022 01:46:38 GMT", "version": "v1" }, { "created": "Sun, 16 Oct 2022 01:18:45 GMT", "version": "v2" } ]
2022-10-18
[ [ "Liu", "Kay", "" ], [ "Dou", "Yingtong", "" ], [ "Zhao", "Yue", "" ], [ "Ding", "Xueying", "" ], [ "Hu", "Xiyang", "" ], [ "Zhang", "Ruitong", "" ], [ "Ding", "Kaize", "" ], [ "Chen", "Canyu", "" ], [ "Peng", "Hao", "" ], [ "Shu", "Kai", "" ], [ "Sun", "Lichao", "" ], [ "Li", "Jundong", "" ], [ "Chen", "George H.", "" ], [ "Jia", "Zhihao", "" ], [ "Yu", "Philip S.", "" ] ]
Detecting which nodes in graphs are outliers is a relatively new machine learning task with numerous applications. Despite the proliferation of algorithms developed in recent years for this task, there has been no standard comprehensive setting for performance evaluation. Consequently, it has been difficult to understand which methods work well and when under a broad range of settings. To bridge this gap, we present--to the best of our knowledge--the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs called BOND, with the following highlights. (1) We benchmark the outlier detection performance of 14 methods ranging from classical matrix factorization to the latest graph neural networks. (2) Using nine real datasets, our benchmark assesses how the different detection methods respond to two major types of synthetic outliers and separately to "organic" (real non-synthetic) outliers. (3) Using an existing random graph generation technique, we produce a family of synthetically generated datasets of different graph sizes that enable us to compare the running time and memory usage of the different outlier detection algorithms. Based on our experimental results, we discuss the pros and cons of existing graph outlier detection algorithms, and we highlight opportunities for future research. Importantly, our code is freely available and meant to be easily extendable: https://github.com/pygod-team/pygod/tree/main/benchmark
2302.10096
Christian Anti\'c
Christian Anti\'c
Similarity
null
null
null
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting and exploiting similarities between seemingly distant objects is without doubt an important human ability. This paper develops \textit{from the ground up} an abstract algebraic and qualitative justification-based notion of similarity based on the observation that sets of generalizations encode important properties of elements. We show that similarity defined in this way has appealing mathematical properties. As we construct our notion of similarity from first principles using only elementary concepts of universal algebra, to convince the reader of its plausibility, we show that it can be naturally embedded into first-order logic via model-theoretic types.
[ { "created": "Mon, 13 Feb 2023 14:48:59 GMT", "version": "v1" }, { "created": "Mon, 27 Feb 2023 20:55:04 GMT", "version": "v2" }, { "created": "Wed, 11 Oct 2023 14:49:59 GMT", "version": "v3" }, { "created": "Tue, 12 Dec 2023 15:08:49 GMT", "version": "v4" }, { "created": "Sun, 28 Jan 2024 00:56:10 GMT", "version": "v5" }, { "created": "Wed, 3 Apr 2024 09:35:53 GMT", "version": "v6" } ]
2024-04-04
[ [ "Antić", "Christian", "" ] ]
Detecting and exploiting similarities between seemingly distant objects is without doubt an important human ability. This paper develops \textit{from the ground up} an abstract algebraic and qualitative justification-based notion of similarity based on the observation that sets of generalizations encode important properties of elements. We show that similarity defined in this way has appealing mathematical properties. As we construct our notion of similarity from first principles using only elementary concepts of universal algebra, to convince the reader of its plausibility, we show that it can be naturally embedded into first-order logic via model-theoretic types.
2002.02460
Ezequiel Alvarez
Ezequiel Alvarez (ICAS), Federico Lamagna (CAB), Cesar Miquel (Easytech) and Manuel Szewc (ICAS)
Intelligent Arxiv: Sort daily papers by learning users topics preference
We are open to new ideas and to scientists and institutions wishing to collaborate and/or partner in further improvements for this service. With this tool the time a paper is sent is irrelevant for its order of appearance
null
null
ICAS 047/20
cs.LG astro-ph.HE gr-qc hep-ph hep-th stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current daily paper releases are becoming increasingly large and areas of research are growing in diversity. This makes it harder for scientists to keep up to date with current state of the art and identify relevant work within their lines of interest. The goal of this article is to address this problem using Machine Learning techniques. We model a scientific paper to be built as a combination of different scientific knowledge from diverse topics into a new problem. In light of this, we implement the unsupervised Machine Learning technique of Latent Dirichlet Allocation (LDA) on the corpus of papers in a given field to: i) define and extract underlying topics in the corpus; ii) get the topics weight vector for each paper in the corpus; and iii) get the topics weight vector for new papers. By registering papers preferred by a user, we build a user vector of weights using the information of the vectors of the selected papers. Hence, by performing an inner product between the user vector and each paper in the daily Arxiv release, we can sort the papers according to the user preference on the underlying topics. We have created the website IArxiv.org where users can read sorted daily Arxiv releases (and more) while the algorithm learns each users preference, yielding a more accurate sorting every day. Current IArxiv.org version runs on Arxiv categories astro-ph, gr-qc, hep-ph and hep-th and we plan to extend to others. We propose several new useful and relevant implementations to be additionally developed as well as new Machine Learning techniques beyond LDA to further improve the accuracy of this new tool.
[ { "created": "Thu, 6 Feb 2020 19:00:02 GMT", "version": "v1" } ]
2020-02-10
[ [ "Alvarez", "Ezequiel", "", "ICAS" ], [ "Lamagna", "Federico", "", "CAB" ], [ "Miquel", "Cesar", "", "Easytech" ], [ "Szewc", "Manuel", "", "ICAS" ] ]
Current daily paper releases are becoming increasingly large, and areas of research are growing in diversity. This makes it harder for scientists to keep up to date with the current state of the art and to identify relevant work within their lines of interest. The goal of this article is to address this problem using Machine Learning techniques. We model a scientific paper as a combination of scientific knowledge from diverse underlying topics. In light of this, we apply the unsupervised Machine Learning technique of Latent Dirichlet Allocation (LDA) to the corpus of papers in a given field to: i) define and extract the underlying topics in the corpus; ii) obtain the topic weight vector for each paper in the corpus; and iii) obtain the topic weight vector for new papers. By registering the papers preferred by a user, we build a user vector of weights from the vectors of the selected papers. Hence, by taking the inner product between the user vector and each paper in the daily Arxiv release, we can sort the papers according to the user's preference over the underlying topics. We have created the website IArxiv.org, where users can read sorted daily Arxiv releases (and more) while the algorithm learns each user's preference, yielding a more accurate sorting every day. The current IArxiv.org version runs on the Arxiv categories astro-ph, gr-qc, hep-ph and hep-th, and we plan to extend it to others. We propose several new useful and relevant implementations to be developed, as well as new Machine Learning techniques beyond LDA, to further improve the accuracy of this new tool.
1501.06678
Zhiwen Zeng
Zhiwen Zeng, Xiangke Wang, Zhiqiang Zheng
Edge Agreement of Multi-agent System with Quantized Measurements via the Directed Edge Laplacian
16 pages, 10 figures; Round2, revised to IET Control Theory & Applications, 2016
null
null
null
cs.SY cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work explores the edge agreement problem of second-order nonlinear multi-agent system under quantized measurements. Under the edge agreement framework, we introduce an important concept about the \emph{essential edge Laplacian} and also obtain a reduced model of the edge agreement dynamics based on the spanning tree subgraph. The quantized edge agreement problem of second-order nonlinear multi-agent system is studied, in which both uniform and logarithmic quantizers are considered. We do not only guarantee the stability of the proposed quantized control law, but also reveal the explicit mathematical connection of the quantized interval and the convergence properties for both uniform and logarithmic quantizers, which has not been addressed before. Particularly, for uniform quantizers, we provide the upper bound of the radius of the agreement neighborhood and indicate that the radius increases with the quantization interval. While for logarithmic quantizers, the agents converge exponentially to the desired agreement equilibrium. In addition, we figure out the relationship of the quantization interval and the convergence speed and also provide the estimates of the convergence rate. Finally, simulation results are given to verify the theoretical analysis.
[ { "created": "Tue, 27 Jan 2015 08:00:21 GMT", "version": "v1" }, { "created": "Fri, 29 Jan 2016 01:54:34 GMT", "version": "v2" } ]
2016-02-01
[ [ "Zeng", "Zhiwen", "" ], [ "Wang", "Xiangke", "" ], [ "Zheng", "Zhiqiang", "" ] ]
This work explores the edge agreement problem of second-order nonlinear multi-agent systems under quantized measurements. Under the edge agreement framework, we introduce the important concept of the \emph{essential edge Laplacian} and obtain a reduced model of the edge agreement dynamics based on the spanning tree subgraph. The quantized edge agreement problem of second-order nonlinear multi-agent systems is studied, in which both uniform and logarithmic quantizers are considered. We not only guarantee the stability of the proposed quantized control law, but also reveal the explicit mathematical connection between the quantization interval and the convergence properties for both uniform and logarithmic quantizers, which has not been addressed before. In particular, for uniform quantizers, we provide an upper bound on the radius of the agreement neighborhood and show that the radius increases with the quantization interval. For logarithmic quantizers, the agents converge exponentially to the desired agreement equilibrium. In addition, we characterize the relationship between the quantization interval and the convergence speed and provide estimates of the convergence rate. Finally, simulation results are given to verify the theoretical analysis.
2301.00014
Tommaso Barbariol
Tommaso Barbariol, Davide Masiero, Enrico Feltresi, Gian Antonio Susto
Time series Forecasting to detect anomalous behaviours in Multiphase Flow Meters
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
An Anomaly Detection (AD) System for Self-diagnosis has been developed for Multiphase Flow Meter (MPFM). The system relies on machine learning algorithms for time series forecasting, historical data have been used to train a model and to predict the behavior of a sensor and, thus, to detect anomalies.
[ { "created": "Fri, 30 Dec 2022 14:41:53 GMT", "version": "v1" } ]
2023-01-03
[ [ "Barbariol", "Tommaso", "" ], [ "Masiero", "Davide", "" ], [ "Feltresi", "Enrico", "" ], [ "Susto", "Gian Antonio", "" ] ]
An Anomaly Detection (AD) system for self-diagnosis has been developed for Multiphase Flow Meters (MPFMs). The system relies on machine learning algorithms for time series forecasting: historical data are used to train a model that predicts the behavior of a sensor and thus detects anomalies.
2307.11049
Abhishek Gupta
Marcel Torne, Max Balsells, Zihan Wang, Samedh Desai, Tao Chen, Pulkit Agrawal, Abhishek Gupta
Breadcrumbs to the Goal: Goal-Conditioned Exploration from Human-in-the-Loop Feedback
null
null
null
null
cs.LG cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploration and reward specification are fundamental and intertwined challenges for reinforcement learning. Solving sequential decision-making tasks requiring expansive exploration requires either careful design of reward functions or the use of novelty-seeking exploration bonuses. Human supervisors can provide effective guidance in the loop to direct the exploration process, but prior methods to leverage this guidance require constant synchronous high-quality human feedback, which is expensive and impractical to obtain. In this work, we present a technique called Human Guided Exploration (HuGE), which uses low-quality feedback from non-expert users that may be sporadic, asynchronous, and noisy. HuGE guides exploration for reinforcement learning not only in simulation but also in the real world, all without meticulous reward specification. The key concept involves bifurcating human feedback and policy learning: human feedback steers exploration, while self-supervised learning from the exploration data yields unbiased policies. This procedure can leverage noisy, asynchronous human feedback to learn policies with no hand-crafted reward design or exploration bonuses. HuGE is able to learn a variety of challenging multi-stage robotic navigation and manipulation tasks in simulation using crowdsourced feedback from non-expert users. Moreover, this paradigm can be scaled to learning directly on real-world robots, using occasional, asynchronous feedback from human supervisors.
[ { "created": "Thu, 20 Jul 2023 17:30:37 GMT", "version": "v1" } ]
2023-07-21
[ [ "Torne", "Marcel", "" ], [ "Balsells", "Max", "" ], [ "Wang", "Zihan", "" ], [ "Desai", "Samedh", "" ], [ "Chen", "Tao", "" ], [ "Agrawal", "Pulkit", "" ], [ "Gupta", "Abhishek", "" ] ]
Exploration and reward specification are fundamental and intertwined challenges for reinforcement learning. Solving sequential decision-making tasks requiring expansive exploration requires either careful design of reward functions or the use of novelty-seeking exploration bonuses. Human supervisors can provide effective guidance in the loop to direct the exploration process, but prior methods to leverage this guidance require constant synchronous high-quality human feedback, which is expensive and impractical to obtain. In this work, we present a technique called Human Guided Exploration (HuGE), which uses low-quality feedback from non-expert users that may be sporadic, asynchronous, and noisy. HuGE guides exploration for reinforcement learning not only in simulation but also in the real world, all without meticulous reward specification. The key concept involves bifurcating human feedback and policy learning: human feedback steers exploration, while self-supervised learning from the exploration data yields unbiased policies. This procedure can leverage noisy, asynchronous human feedback to learn policies with no hand-crafted reward design or exploration bonuses. HuGE is able to learn a variety of challenging multi-stage robotic navigation and manipulation tasks in simulation using crowdsourced feedback from non-expert users. Moreover, this paradigm can be scaled to learning directly on real-world robots, using occasional, asynchronous feedback from human supervisors.
2406.12311
Dongwon Jo
Dongwon Jo, Taesu Kim, Yulhwa Kim, Jae-Joon Kim
Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Binarization, which converts weight parameters to binary values, has emerged as an effective strategy to reduce the size of large language models (LLMs). However, typical binarization techniques significantly diminish linguistic effectiveness of LLMs. To address this issue, we introduce a novel binarization technique called Mixture of Scales (BinaryMoS). Unlike conventional methods, BinaryMoS employs multiple scaling experts for binary weights, dynamically merging these experts for each token to adaptively generate scaling factors. This token-adaptive approach boosts the representational power of binarized LLMs by enabling contextual adjustments to the values of binary weights. Moreover, because this adaptive process only involves the scaling factors rather than the entire weight matrix, BinaryMoS maintains compression efficiency similar to traditional static binarization methods. Our experimental results reveal that BinaryMoS surpasses conventional binarization techniques in various natural language processing tasks and even outperforms 2-bit quantization methods, all while maintaining similar model size to static binarization techniques.
[ { "created": "Tue, 18 Jun 2024 06:32:23 GMT", "version": "v1" } ]
2024-06-19
[ [ "Jo", "Dongwon", "" ], [ "Kim", "Taesu", "" ], [ "Kim", "Yulhwa", "" ], [ "Kim", "Jae-Joon", "" ] ]
Binarization, which converts weight parameters to binary values, has emerged as an effective strategy to reduce the size of large language models (LLMs). However, typical binarization techniques significantly diminish the linguistic effectiveness of LLMs. To address this issue, we introduce a novel binarization technique called Mixture of Scales (BinaryMoS). Unlike conventional methods, BinaryMoS employs multiple scaling experts for binary weights, dynamically merging these experts for each token to adaptively generate scaling factors. This token-adaptive approach boosts the representational power of binarized LLMs by enabling contextual adjustments to the values of binary weights. Moreover, because this adaptive process involves only the scaling factors rather than the entire weight matrix, BinaryMoS maintains compression efficiency similar to that of traditional static binarization methods. Our experimental results reveal that BinaryMoS surpasses conventional binarization techniques in various natural language processing tasks and even outperforms 2-bit quantization methods, all while maintaining a model size similar to that of static binarization techniques.
2402.04470
Zhicheng Lin
Zhicheng Lin
Large language models as probes into latent psychology
8 pages, 1 table
null
null
null
cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
Advances in AI invite the misuse of language models as stand-ins for human minds or participants, which fundamentally mischaracterizes these statistical algorithms. We argue that language models should be embraced as flexible simulation tools, able to mimic a wide range of behaviors, perspectives, and psychological attributes evident in human language data, but the models themselves should not be equated to or anthropomorphized as human minds.
[ { "created": "Tue, 6 Feb 2024 23:28:23 GMT", "version": "v1" }, { "created": "Tue, 27 Feb 2024 03:21:04 GMT", "version": "v2" } ]
2024-02-28
[ [ "Lin", "Zhicheng", "" ] ]
Advances in AI invite the misuse of language models as stand-ins for human minds or participants, which fundamentally mischaracterizes these statistical algorithms. We argue that language models should be embraced as flexible simulation tools, able to mimic a wide range of behaviors, perspectives, and psychological attributes evident in human language data, but the models themselves should not be equated to or anthropomorphized as human minds.
2310.12680
Puneesh Deora
Puneesh Deora, Rouzbeh Ghaderi, Hossein Taheri, Christos Thrampoulidis
On the Optimization and Generalization of Multi-head Attention
48 page; presented in the Workshop on High-dimensional Learning Dynamics, ICML 2023
null
null
null
cs.LG math.OC stat.ML
http://creativecommons.org/licenses/by/4.0/
The training and generalization dynamics of the Transformer's core mechanism, namely the Attention mechanism, remain under-explored. Besides, existing analyses primarily focus on single-head attention. Inspired by the demonstrated benefits of overparameterization when training fully-connected networks, we investigate the potential optimization and generalization advantages of using multiple attention heads. Towards this goal, we derive convergence and generalization guarantees for gradient-descent training of a single-layer multi-head self-attention model, under a suitable realizability condition on the data. We then establish primitive conditions on the initialization that ensure realizability holds. Finally, we demonstrate that these conditions are satisfied for a simple tokenized-mixture model. We expect the analysis can be extended to various data-model and architecture variations.
[ { "created": "Thu, 19 Oct 2023 12:18:24 GMT", "version": "v1" } ]
2023-10-20
[ [ "Deora", "Puneesh", "" ], [ "Ghaderi", "Rouzbeh", "" ], [ "Taheri", "Hossein", "" ], [ "Thrampoulidis", "Christos", "" ] ]
The training and generalization dynamics of the Transformer's core mechanism, namely the Attention mechanism, remain under-explored. Besides, existing analyses primarily focus on single-head attention. Inspired by the demonstrated benefits of overparameterization when training fully-connected networks, we investigate the potential optimization and generalization advantages of using multiple attention heads. Towards this goal, we derive convergence and generalization guarantees for gradient-descent training of a single-layer multi-head self-attention model, under a suitable realizability condition on the data. We then establish primitive conditions on the initialization that ensure realizability holds. Finally, we demonstrate that these conditions are satisfied for a simple tokenized-mixture model. We expect the analysis can be extended to various data-model and architecture variations.
2104.05237
Zifan Shi
Hao Ouyang, Zifan Shi, Chenyang Lei, Ka Lung Law and Qifeng Chen
Neural Camera Simulators
Accepted to CVPR2021
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a controllable camera simulator based on deep neural networks to synthesize raw image data under different camera settings, including exposure time, ISO, and aperture. The proposed simulator includes an exposure module that utilizes the principle of modern lens designs for correcting the luminance level. It also contains a noise module using the noise level function and an aperture module with adaptive attention to simulate the side effects on noise and defocus blur. To facilitate the learning of a simulator model, we collect a dataset of the 10,000 raw images of 450 scenes with different exposure settings. Quantitative experiments and qualitative comparisons show that our approach outperforms relevant baselines in raw data synthesize on multiple cameras. Furthermore, the camera simulator enables various applications, including large-aperture enhancement, HDR, auto exposure, and data augmentation for training local feature detectors. Our work represents the first attempt to simulate a camera sensor's behavior leveraging both the advantage of traditional raw sensor features and the power of data-driven deep learning.
[ { "created": "Mon, 12 Apr 2021 07:06:27 GMT", "version": "v1" }, { "created": "Mon, 9 Aug 2021 09:42:52 GMT", "version": "v2" } ]
2021-08-10
[ [ "Ouyang", "Hao", "" ], [ "Shi", "Zifan", "" ], [ "Lei", "Chenyang", "" ], [ "Law", "Ka Lung", "" ], [ "Chen", "Qifeng", "" ] ]
We present a controllable camera simulator based on deep neural networks to synthesize raw image data under different camera settings, including exposure time, ISO, and aperture. The proposed simulator includes an exposure module that utilizes the principle of modern lens designs to correct the luminance level. It also contains a noise module using the noise level function and an aperture module with adaptive attention to simulate the side effects on noise and defocus blur. To facilitate the learning of the simulator model, we collect a dataset of 10,000 raw images of 450 scenes with different exposure settings. Quantitative experiments and qualitative comparisons show that our approach outperforms relevant baselines in raw data synthesis on multiple cameras. Furthermore, the camera simulator enables various applications, including large-aperture enhancement, HDR, auto exposure, and data augmentation for training local feature detectors. Our work represents the first attempt to simulate a camera sensor's behavior by leveraging both the advantages of traditional raw sensor features and the power of data-driven deep learning.
1410.0245
Jeremy Kun
Benjamin Fish and Jeremy Kun and \'Ad\'am D\'aniel Lelkes and Lev Reyzin and Gy\"orgy Tur\'an
On the Computational Complexity of MapReduce
null
null
null
null
cs.CC cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study MapReduce computations from a complexity-theoretic perspective. First, we formulate a uniform version of the MRC model of Karloff et al. (2010). We then show that the class of regular languages, and moreover all of sublogarithmic space, lies in constant round MRC. This result also applies to the MPC model of Andoni et al. (2014). In addition, we prove that, conditioned on a variant of the Exponential Time Hypothesis, there are strict hierarchies within MRC so that increasing the number of rounds or the amount of time per processor increases the power of MRC. To the best of our knowledge we are the first to approach the MapReduce model with complexity-theoretic techniques, and our work lays the foundation for further analysis relating MapReduce to established complexity classes.
[ { "created": "Wed, 1 Oct 2014 14:44:01 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2015 18:43:00 GMT", "version": "v2" } ]
2015-10-07
[ [ "Fish", "Benjamin", "" ], [ "Kun", "Jeremy", "" ], [ "Lelkes", "Ádám Dániel", "" ], [ "Reyzin", "Lev", "" ], [ "Turán", "György", "" ] ]
In this paper we study MapReduce computations from a complexity-theoretic perspective. First, we formulate a uniform version of the MRC model of Karloff et al. (2010). We then show that the class of regular languages, and moreover all of sublogarithmic space, lies in constant round MRC. This result also applies to the MPC model of Andoni et al. (2014). In addition, we prove that, conditioned on a variant of the Exponential Time Hypothesis, there are strict hierarchies within MRC so that increasing the number of rounds or the amount of time per processor increases the power of MRC. To the best of our knowledge we are the first to approach the MapReduce model with complexity-theoretic techniques, and our work lays the foundation for further analysis relating MapReduce to established complexity classes.
2404.13861
Jessica Dai
Jessica Dai
Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
What is agency, and why does it matter? In this work, we draw from the political science and philosophy literature and give two competing visions of what it means to be an (ethical) agent. The first view, which we term mechanistic, is commonly--and implicitly--assumed in AI research, yet it is a fundamentally limited means to understand the ethical characteristics of AI. Under the second view, which we term volitional, AI can no longer be considered an ethical agent. We discuss the implications of each of these views for two critical questions: first, what the ideal system ought to look like, and second, how accountability may be achieved. In light of this discussion, we ultimately argue that, in the context of ethically-significant behavior, AI should be viewed not as an agent but as the outcome of political processes.
[ { "created": "Mon, 22 Apr 2024 04:19:24 GMT", "version": "v1" } ]
2024-04-23
[ [ "Dai", "Jessica", "" ] ]
What is agency, and why does it matter? In this work, we draw from the political science and philosophy literature and give two competing visions of what it means to be an (ethical) agent. The first view, which we term mechanistic, is commonly--and implicitly--assumed in AI research, yet it is a fundamentally limited means to understand the ethical characteristics of AI. Under the second view, which we term volitional, AI can no longer be considered an ethical agent. We discuss the implications of each of these views for two critical questions: first, what the ideal system ought to look like, and second, how accountability may be achieved. In light of this discussion, we ultimately argue that, in the context of ethically-significant behavior, AI should be viewed not as an agent but as the outcome of political processes.
1301.7482
Austin Jones M.S.
Austin Jones and Mac Schwager and Calin Belta
Technical Report: A Receding Horizon Algorithm for Informative Path Planning with Temporal Logic Constraints
Extended version of paper accepted to 2013 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This technical report is an extended version of the paper 'A Receding Horizon Algorithm for Informative Path Planning with Temporal Logic Constraints' accepted to the 2013 IEEE International Conference on Robotics and Automation (ICRA). This paper considers the problem of finding the most informative path for a sensing robot under temporal logic constraints, a richer set of constraints than have previously been considered in information gathering. An algorithm for informative path planning is presented that leverages tools from information theory and formal control synthesis, and is proven to give a path that satisfies the given temporal logic constraints. The algorithm uses a receding horizon approach in order to provide a reactive, on-line solution while mitigating computational complexity. Statistics compiled from multiple simulation studies indicate that this algorithm performs better than a baseline exhaustive search approach.
[ { "created": "Thu, 31 Jan 2013 00:33:24 GMT", "version": "v1" } ]
2013-02-01
[ [ "Jones", "Austin", "" ], [ "Schwager", "Mac", "" ], [ "Belta", "Calin", "" ] ]
This technical report is an extended version of the paper 'A Receding Horizon Algorithm for Informative Path Planning with Temporal Logic Constraints' accepted to the 2013 IEEE International Conference on Robotics and Automation (ICRA). This paper considers the problem of finding the most informative path for a sensing robot under temporal logic constraints, a richer set of constraints than have previously been considered in information gathering. An algorithm for informative path planning is presented that leverages tools from information theory and formal control synthesis, and is proven to give a path that satisfies the given temporal logic constraints. The algorithm uses a receding horizon approach in order to provide a reactive, on-line solution while mitigating computational complexity. Statistics compiled from multiple simulation studies indicate that this algorithm performs better than a baseline exhaustive search approach.
1712.03297
Ke Chen
Ke Chen and Adrian Dumitrescu
On the Longest Spanning Tree with Neighborhoods
12 pages, 4 figures. Section 2 is split into three subsections; more technical details are provided in section 3
null
null
null
cs.CG math.MG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a maximization problem for geometric network design. Given a set of $n$ compact neighborhoods in $\mathbb{R}^d$, select a point in each neighborhood, so that the longest spanning tree on these points (as vertices) has maximum length. Here we give an approximation algorithm with ratio $0.511$, which represents the first, albeit small, improvement beyond $1/2$. While we suspect that the problem is NP-hard already in the plane, this issue remains open.
[ { "created": "Fri, 8 Dec 2017 22:24:22 GMT", "version": "v1" }, { "created": "Wed, 29 Apr 2020 02:38:40 GMT", "version": "v2" } ]
2020-04-30
[ [ "Chen", "Ke", "" ], [ "Dumitrescu", "Adrian", "" ] ]
We study a maximization problem for geometric network design. Given a set of $n$ compact neighborhoods in $\mathbb{R}^d$, select a point in each neighborhood, so that the longest spanning tree on these points (as vertices) has maximum length. Here we give an approximation algorithm with ratio $0.511$, which represents the first, albeit small, improvement beyond $1/2$. While we suspect that the problem is NP-hard already in the plane, this issue remains open.
2112.15253
Sergey A. Slavnov
Sergey Slavnov
First order linear logic and tensor type calculus for categorial grammars
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We study the relationship between first order multiplicative linear logic (MLL1), which has been known to provide representations to different categorial grammars, and the recently introduced extended tensor type calculus (ETTC). We identify a fragment of MLL1, which seems sufficient for many grammar representations, and establish a correspondence between ETTC and this fragment. The system ETTC, thus, can be seen as an alternative syntax and intrinsic deductive system together with a geometric representation for the latter. We also give a natural deduction formulation of ETTC, which might be convenient.
[ { "created": "Fri, 31 Dec 2021 00:35:48 GMT", "version": "v1" } ]
2022-01-03
[ [ "Slavnov", "Sergey", "" ] ]
We study the relationship between first order multiplicative linear logic (MLL1), which has been known to provide representations to different categorial grammars, and the recently introduced extended tensor type calculus (ETTC). We identify a fragment of MLL1, which seems sufficient for many grammar representations, and establish a correspondence between ETTC and this fragment. The system ETTC, thus, can be seen as an alternative syntax and intrinsic deductive system together with a geometric representation for the latter. We also give a natural deduction formulation of ETTC, which might be convenient.
2110.10570
Samuel Bell
Samuel J. Bell and Neil D. Lawrence
Behavioral Experiments for Understanding Catastrophic Forgetting
null
Presented at the AI Evaluation Beyond Metrics (EBeM) Workshop at IJCAI, Vienna 2022
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we explore whether the fundamental tool of experimental psychology, the behavioral experiment, has the power to generate insight not only into humans and animals, but artificial systems too. We apply the techniques of experimental psychology to investigating catastrophic forgetting in neural networks. We present a series of controlled experiments with two-layer ReLU networks, and exploratory results revealing a new understanding of the behavior of catastrophic forgetting. Alongside our empirical findings, we demonstrate an alternative, behavior-first approach to investigating neural network phenomena.
[ { "created": "Wed, 20 Oct 2021 14:00:02 GMT", "version": "v1" }, { "created": "Fri, 22 Oct 2021 11:22:11 GMT", "version": "v2" }, { "created": "Tue, 13 Dec 2022 15:32:46 GMT", "version": "v3" } ]
2022-12-14
[ [ "Bell", "Samuel J.", "" ], [ "Lawrence", "Neil D.", "" ] ]
In this paper we explore whether the fundamental tool of experimental psychology, the behavioral experiment, has the power to generate insight not only into humans and animals, but artificial systems too. We apply the techniques of experimental psychology to investigating catastrophic forgetting in neural networks. We present a series of controlled experiments with two-layer ReLU networks, and exploratory results revealing a new understanding of the behavior of catastrophic forgetting. Alongside our empirical findings, we demonstrate an alternative, behavior-first approach to investigating neural network phenomena.
2402.12252
James Davis
William P. Maxam III and James C. Davis
An Interview Study on Third-Party Cyber Threat Hunting Processes in the U.S. Department of Homeland Security
Technical report accompanying a paper at USENIX Security 2024
null
null
null
cs.CR cs.SE
http://creativecommons.org/licenses/by/4.0/
Cybersecurity is a major challenge for large organizations. Traditional cybersecurity defense is reactive. Cybersecurity operations centers keep out adversaries and incident response teams clean up after break-ins. Recently a proactive stage has been introduced: Cyber Threat Hunting (TH) looks for potential compromises missed by other cyber defenses. TH is mandated for federal executive agencies and government contractors. As threat hunting is a new cybersecurity discipline, most TH teams operate without a defined process. The practices and challenges of TH have not yet been documented. To address this gap, this paper describes the first interview study of threat hunt practitioners. We obtained access and interviewed 11 threat hunters associated with the U.S. government's Department of Homeland Security. Hour-long interviews were conducted. We analyzed the transcripts with process and thematic coding. We describe the diversity among their processes, show that their processes differ from the TH processes reported in the literature, and unify our subjects' descriptions into a single TH process. We enumerate common TH challenges and solutions according to the subjects. The two most common challenges were difficulty in assessing a Threat Hunter's expertise, and developing and maintaining automation. We conclude with recommendations for TH teams (improve planning, focus on automation, and apprentice new members) and highlight directions for future work (finding a TH process that balances flexibility and formalism, and identifying assessments for TH team performance).
[ { "created": "Mon, 19 Feb 2024 16:08:36 GMT", "version": "v1" } ]
2024-02-20
[ [ "Maxam", "William P.", "III" ], [ "Davis", "James C.", "" ] ]
Cybersecurity is a major challenge for large organizations. Traditional cybersecurity defense is reactive. Cybersecurity operations centers keep out adversaries and incident response teams clean up after break-ins. Recently a proactive stage has been introduced: Cyber Threat Hunting (TH) looks for potential compromises missed by other cyber defenses. TH is mandated for federal executive agencies and government contractors. As threat hunting is a new cybersecurity discipline, most TH teams operate without a defined process. The practices and challenges of TH have not yet been documented. To address this gap, this paper describes the first interview study of threat hunt practitioners. We obtained access and interviewed 11 threat hunters associated with the U.S. government's Department of Homeland Security. Hour-long interviews were conducted. We analyzed the transcripts with process and thematic coding. We describe the diversity among their processes, show that their processes differ from the TH processes reported in the literature, and unify our subjects' descriptions into a single TH process. We enumerate common TH challenges and solutions according to the subjects. The two most common challenges were difficulty in assessing a Threat Hunter's expertise, and developing and maintaining automation. We conclude with recommendations for TH teams (improve planning, focus on automation, and apprentice new members) and highlight directions for future work (finding a TH process that balances flexibility and formalism, and identifying assessments for TH team performance).
2110.11712
Simon Meierhans
Rasmus Kyng, Simon Meierhans, Maximilian Probst Gutenberg
Incremental SSSP for Sparse Digraphs Beyond the Hopset Barrier
Accepted at SODA'22
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Given a directed, weighted graph $G=(V,E)$ undergoing edge insertions, the incremental single-source shortest paths (SSSP) problem asks for the maintenance of approximate distances from a dedicated source $s$ while optimizing the total time required to process the insertion sequence of $m$ edges. Recently, Gutenberg, Williams and Wein [STOC'20] introduced a deterministic $\tilde{O}(n^2)$ algorithm for this problem, achieving near linear time for very dense graphs. For sparse graphs, Chechik and Zhang [SODA'21] recently presented a deterministic $\tilde{O}(m^{5/3})$ algorithm, and an adaptive randomized algorithm with run-time $\tilde{O}(m\sqrt{n} + m^{7/5})$. This algorithm is remarkable for two reasons: 1) in very sparse graphs it reaches the directed hopset barrier of $\tilde{\Omega}(n^{3/2})$ that applied to all previous approaches for partially-dynamic SSSP [STOC'14, SODA'20, FOCS'20] \emph{and} 2) it does not resort to a directed hopset technique itself. In this article we introduce \emph{propagation synchronization}, a new technique for controlling the error build-up on paths throughout batches of insertions. This leads us to a significant improvement of the approach in [SODA'21] yielding a \emph{deterministic} $\tilde{O}(m^{3/2})$ algorithm for the problem. By a very careful combination of our new technique with the sampling approach from [SODA'21], we further obtain an adaptive randomized algorithm with total update time $\tilde{O}(m^{4/3})$. This is the first partially-dynamic SSSP algorithm in sparse graphs to bypass the notorious directed hopset barrier which is often seen as the fundamental challenge towards achieving truly near-linear time algorithms.
[ { "created": "Fri, 22 Oct 2021 11:19:14 GMT", "version": "v1" } ]
2021-10-25
[ [ "Kyng", "Rasmus", "" ], [ "Meierhans", "Simon", "" ], [ "Gutenberg", "Maximilian Probst", "" ] ]
Given a directed, weighted graph $G=(V,E)$ undergoing edge insertions, the incremental single-source shortest paths (SSSP) problem asks for the maintenance of approximate distances from a dedicated source $s$ while optimizing the total time required to process the insertion sequence of $m$ edges. Recently, Gutenberg, Williams and Wein [STOC'20] introduced a deterministic $\tilde{O}(n^2)$ algorithm for this problem, achieving near linear time for very dense graphs. For sparse graphs, Chechik and Zhang [SODA'21] recently presented a deterministic $\tilde{O}(m^{5/3})$ algorithm, and an adaptive randomized algorithm with run-time $\tilde{O}(m\sqrt{n} + m^{7/5})$. This algorithm is remarkable for two reasons: 1) in very sparse graphs it reaches the directed hopset barrier of $\tilde{\Omega}(n^{3/2})$ that applied to all previous approaches for partially-dynamic SSSP [STOC'14, SODA'20, FOCS'20] \emph{and} 2) it does not resort to a directed hopset technique itself. In this article we introduce \emph{propagation synchronization}, a new technique for controlling the error build-up on paths throughout batches of insertions. This leads us to a significant improvement of the approach in [SODA'21] yielding a \emph{deterministic} $\tilde{O}(m^{3/2})$ algorithm for the problem. By a very careful combination of our new technique with the sampling approach from [SODA'21], we further obtain an adaptive randomized algorithm with total update time $\tilde{O}(m^{4/3})$. This is the first partially-dynamic SSSP algorithm in sparse graphs to bypass the notorious directed hopset barrier which is often seen as the fundamental challenge towards achieving truly near-linear time algorithms.
2010.04683
Jovita Lukasik
Jovita Lukasik and David Friede and Arber Zela and Frank Hutter and Margret Keuper
Smooth Variational Graph Embeddings for Efficient Neural Architecture Search
8 pages, 3 figures, 5 tables. Camera-Ready Version for IJCNN 2021
null
null
null
cs.LG cs.AI cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural architecture search (NAS) has recently been addressed from various directions, including discrete, sampling-based methods and efficient differentiable approaches. While the former are notoriously expensive, the latter suffer from imposing strong constraints on the search space. Architecture optimization from a learned embedding space, for example through graph neural network based variational autoencoders, builds a middle ground and leverages advantages from both sides. Such approaches have recently shown good performance on several benchmarks. Yet, their stability and predictive power heavily depend on their capacity to reconstruct networks from the embedding space. In this paper, we propose a two-sided variational graph autoencoder, which allows to smoothly encode and accurately reconstruct neural architectures from various search spaces. We evaluate the proposed approach on neural architectures defined by the ENAS approach, the NAS-Bench-101 and the NAS-Bench-201 search space and show that our smooth embedding space allows to directly extrapolate the performance prediction to architectures outside the seen domain (e.g. with more operations). Thus, it facilitates predicting good network architectures even without expensive Bayesian optimization or reinforcement learning.
[ { "created": "Fri, 9 Oct 2020 17:05:41 GMT", "version": "v1" }, { "created": "Tue, 8 Dec 2020 14:50:56 GMT", "version": "v2" }, { "created": "Wed, 12 May 2021 12:44:54 GMT", "version": "v3" } ]
2021-05-13
[ [ "Lukasik", "Jovita", "" ], [ "Friede", "David", "" ], [ "Zela", "Arber", "" ], [ "Hutter", "Frank", "" ], [ "Keuper", "Margret", "" ] ]
Neural architecture search (NAS) has recently been addressed from various directions, including discrete, sampling-based methods and efficient differentiable approaches. While the former are notoriously expensive, the latter suffer from imposing strong constraints on the search space. Architecture optimization from a learned embedding space, for example through graph neural network based variational autoencoders, builds a middle ground and leverages advantages from both sides. Such approaches have recently shown good performance on several benchmarks. Yet, their stability and predictive power heavily depend on their capacity to reconstruct networks from the embedding space. In this paper, we propose a two-sided variational graph autoencoder, which allows to smoothly encode and accurately reconstruct neural architectures from various search spaces. We evaluate the proposed approach on neural architectures defined by the ENAS approach, the NAS-Bench-101 and the NAS-Bench-201 search space and show that our smooth embedding space allows to directly extrapolate the performance prediction to architectures outside the seen domain (e.g. with more operations). Thus, it facilitates predicting good network architectures even without expensive Bayesian optimization or reinforcement learning.
1505.00887
Chu Luo
Jiyou Li, Chu Luo, Zeying Xu
The Minimal and Maximal Sensitivity of the Simplified Weighted Sum Function
6 pages
null
null
null
cs.DM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Sensitivity is an important complexity measure of Boolean functions. In this paper we present properties of the minimal and maximal sensitivity of the simplified weighted sum function. A simple closed formula for the minimal sensitivity of the simplified weighted sum function is obtained. A phenomenon is exhibited that the minimal sensitivity of the weighted sum function is indeed an indicator of large primes, that is, for a large prime p, the minimal sensitivity of the weighted sum function is always equal to one.
[ { "created": "Tue, 5 May 2015 06:16:29 GMT", "version": "v1" }, { "created": "Wed, 27 Jan 2016 16:51:55 GMT", "version": "v2" } ]
2016-01-28
[ [ "Li", "Jiyou", "" ], [ "Luo", "Chu", "" ], [ "Xu", "Zeying", "" ] ]
Sensitivity is an important complexity measure of Boolean functions. In this paper we present properties of the minimal and maximal sensitivity of the simplified weighted sum function. A simple closed formula for the minimal sensitivity of the simplified weighted sum function is obtained. A phenomenon is exhibited that the minimal sensitivity of the weighted sum function is indeed an indicator of large primes, that is, for a large prime p, the minimal sensitivity of the weighted sum function is always equal to one.
1908.05557
Anh Truong
Anh Truong, Austin Walters, Jeremy Goodsitt, Keegan Hines, C. Bayan Bruss, Reza Farivar
Towards Automated Machine Learning: Evaluation and Comparison of AutoML Approaches and Tools
null
null
10.1109/ICTAI.2019.00209
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been considerable growth and interest in industrial applications of machine learning (ML) in recent years. ML engineers, as a consequence, are in high demand across the industry, yet improving the efficiency of ML engineers remains a fundamental challenge. Automated machine learning (AutoML) has emerged as a way to save time and effort on repetitive tasks in ML pipelines, such as data pre-processing, feature engineering, model selection, hyperparameter optimization, and prediction result analysis. In this paper, we investigate the current state of AutoML tools aiming to automate these tasks. We conduct various evaluations of the tools on many datasets, in different data segments, to examine their performance, and compare their advantages and disadvantages on different test cases.
[ { "created": "Thu, 15 Aug 2019 14:16:09 GMT", "version": "v1" }, { "created": "Tue, 3 Sep 2019 19:31:52 GMT", "version": "v2" } ]
2020-05-05
[ [ "Truong", "Anh", "" ], [ "Walters", "Austin", "" ], [ "Goodsitt", "Jeremy", "" ], [ "Hines", "Keegan", "" ], [ "Bruss", "C. Bayan", "" ], [ "Farivar", "Reza", "" ] ]
There has been considerable growth and interest in industrial applications of machine learning (ML) in recent years. ML engineers, as a consequence, are in high demand across the industry, yet improving the efficiency of ML engineers remains a fundamental challenge. Automated machine learning (AutoML) has emerged as a way to save time and effort on repetitive tasks in ML pipelines, such as data pre-processing, feature engineering, model selection, hyperparameter optimization, and prediction result analysis. In this paper, we investigate the current state of AutoML tools aiming to automate these tasks. We conduct various evaluations of the tools on many datasets, in different data segments, to examine their performance, and compare their advantages and disadvantages on different test cases.
2206.13176
Yifan Hou
Yifan Hou, Hongzhi Chen, Changji Li, James Cheng, Ming-Chang Yang
A Representation Learning Framework for Property Graphs
This paper is published in KDD 2019. Code can be found here: https://github.com/yifan-h/PGE
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Representation learning on graphs, also called graph embedding, has demonstrated its significant impact on a series of machine learning applications such as classification, prediction and recommendation. However, existing work has largely ignored the rich information contained in the properties (or attributes) of both nodes and edges of graphs in modern applications, e.g., those represented by property graphs. To date, most existing graph embedding methods either focus on plain graphs with only the graph topology, or consider properties on nodes only. We propose PGE, a graph representation learning framework that incorporates both node and edge properties into the graph embedding procedure. PGE uses node clustering to assign biases to differentiate neighbors of a node and leverages multiple data-driven matrices to aggregate the property information of neighbors sampled based on a biased strategy. PGE adopts the popular inductive model for neighborhood aggregation. We provide detailed analyses on the efficacy of our method and validate the performance of PGE by showing how PGE achieves better embedding results than the state-of-the-art graph embedding methods on benchmark applications such as node classification and link prediction over real-world datasets.
[ { "created": "Mon, 27 Jun 2022 10:36:57 GMT", "version": "v1" } ]
2022-06-28
[ [ "Hou", "Yifan", "" ], [ "Chen", "Hongzhi", "" ], [ "Li", "Changji", "" ], [ "Cheng", "James", "" ], [ "Yang", "Ming-Chang", "" ] ]
Representation learning on graphs, also called graph embedding, has demonstrated its significant impact on a series of machine learning applications such as classification, prediction and recommendation. However, existing work has largely ignored the rich information contained in the properties (or attributes) of both nodes and edges of graphs in modern applications, e.g., those represented by property graphs. To date, most existing graph embedding methods either focus on plain graphs with only the graph topology, or consider properties on nodes only. We propose PGE, a graph representation learning framework that incorporates both node and edge properties into the graph embedding procedure. PGE uses node clustering to assign biases to differentiate neighbors of a node and leverages multiple data-driven matrices to aggregate the property information of neighbors sampled based on a biased strategy. PGE adopts the popular inductive model for neighborhood aggregation. We provide detailed analyses on the efficacy of our method and validate the performance of PGE by showing how PGE achieves better embedding results than the state-of-the-art graph embedding methods on benchmark applications such as node classification and link prediction over real-world datasets.
1801.08737
Khoa Nguyen
San Ling and Khoa Nguyen and Huaxiong Wang and Yanhong Xu
Lattice-Based Group Signatures: Achieving Full Dynamicity (and Deniability) with Ease
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we provide the first lattice-based group signature that offers full dynamicity (i.e., users have the flexibility in joining and leaving the group), and thus, resolve a prominent open problem posed by previous works. Moreover, we achieve this non-trivial feat in a relatively simple manner. Our design approach consists of upgrading Libert et al.'s fully static construction (EUROCRYPT 2016) - which is arguably the most efficient lattice-based group signature to date - directly into the fully dynamic setting. Somewhat surprisingly, our scheme even produces slightly shorter signatures than the former, thanks to an adaptation of a technique proposed by Ling et al. (PKC 2013), allowing to prove inequalities in zero-knowledge without relying on any inequality check. The scheme satisfies the strong security requirements of Bootle et al.'s model (ACNS 2016), under the Short Integer Solution (SIS) and the Learning With Errors (LWE) assumptions. Furthermore, we demonstrate how to equip the obtained group signature scheme with the deniability functionality in a simple way. This attractive functionality, put forward by Ishida et al. (CANS 2016), enables the tracing authority to provide evidence that a given user is not the owner of a signature in question. In the process, we design a zero-knowledge protocol for proving that a given LWE ciphertext does not decrypt to a particular message.
[ { "created": "Fri, 26 Jan 2018 10:09:03 GMT", "version": "v1" } ]
2018-01-29
[ [ "Ling", "San", "" ], [ "Nguyen", "Khoa", "" ], [ "Wang", "Huaxiong", "" ], [ "Xu", "Yanhong", "" ] ]
In this work, we provide the first lattice-based group signature that offers full dynamicity (i.e., users have the flexibility in joining and leaving the group), and thus, resolve a prominent open problem posed by previous works. Moreover, we achieve this non-trivial feat in a relatively simple manner. Our design approach consists of upgrading Libert et al.'s fully static construction (EUROCRYPT 2016) - which is arguably the most efficient lattice-based group signature to date - directly into the fully dynamic setting. Somewhat surprisingly, our scheme even produces slightly shorter signatures than the former, thanks to an adaptation of a technique proposed by Ling et al. (PKC 2013), allowing to prove inequalities in zero-knowledge without relying on any inequality check. The scheme satisfies the strong security requirements of Bootle et al.'s model (ACNS 2016), under the Short Integer Solution (SIS) and the Learning With Errors (LWE) assumptions. Furthermore, we demonstrate how to equip the obtained group signature scheme with the deniability functionality in a simple way. This attractive functionality, put forward by Ishida et al. (CANS 2016), enables the tracing authority to provide evidence that a given user is not the owner of a signature in question. In the process, we design a zero-knowledge protocol for proving that a given LWE ciphertext does not decrypt to a particular message.
2011.05158
Pablo Samuel Castro
Pablo Samuel Castro
GANterpretations
In 4th Workshop on Machine Learning for Creativity and Design at NeurIPS 2020, Vancouver, Canada
null
null
null
cs.SD cs.AI cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the introduction of Generative Adversarial Networks (GANs) [Goodfellow et al., 2014] there has been a regular stream of both technical advances (e.g., Arjovsky et al. [2017]) and creative uses of these generative models (e.g., [Karras et al., 2019, Zhu et al., 2017, Jin et al., 2017]). In this work we propose an approach for using the power of GANs to automatically generate videos to accompany audio recordings by aligning to spectral properties of the recording. This allows musicians to explore new forms of multi-modal creative expression, where musical performance can induce an AI-generated musical video that is guided by said performance, as well as a medium for creating a visual narrative to follow a storyline (similar to what was proposed by Frosst and Kereliuk [2019]).
[ { "created": "Fri, 6 Nov 2020 19:08:40 GMT", "version": "v1" } ]
2020-11-11
[ [ "Castro", "Pablo Samuel", "" ] ]
Since the introduction of Generative Adversarial Networks (GANs) [Goodfellow et al., 2014] there has been a regular stream of both technical advances (e.g., Arjovsky et al. [2017]) and creative uses of these generative models (e.g., [Karras et al., 2019, Zhu et al., 2017, Jin et al., 2017]). In this work we propose an approach for using the power of GANs to automatically generate videos to accompany audio recordings by aligning to spectral properties of the recording. This allows musicians to explore new forms of multi-modal creative expression, where musical performance can induce an AI-generated musical video that is guided by said performance, as well as a medium for creating a visual narrative to follow a storyline (similar to what was proposed by Frosst and Kereliuk [2019]).
1409.2485
Bernhard Rumpe
Shahar Maoz, Jan Oliver Ringert, Bernhard Rumpe
A Manifesto for Semantic Model Differencing
10 pages, 7 figures. arXiv admin note: text overlap with arXiv:1409.2355, arXiv:1409.2352
Proceedings Int. Workshop on Models and Evolution (ME'10), co-located with MoDELS'10. J. Dingel and A. Solberg (Eds.): MoDELS Workshops, LNCS 6627, pp. 194 - 203, 2010
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models are heavily used in software engineering and, together with their systems, they evolve over time. Thus, managing their changes is an important challenge for system maintainability. Existing approaches to model differencing concentrate on heuristic matching between model elements and on finding and presenting differences at a concrete or abstract syntactic level. While showing some success, these approaches are inherently limited to comparing syntactic structures. This paper is a manifesto for research on semantic model differencing. We present our vision to develop semantic diff operators for model comparisons: operators whose input consists of two models and whose output is a set of diff witnesses, instances of one model that are not instances of the other. In particular, if the models are syntactically different but there are no diff witnesses, the models are semantically equivalent. We demonstrate our vision using two concrete diff operators, for class diagrams and for activity diagrams. We motivate the use of semantic diff operators, briefly discuss the algorithms to compute them, list related challenges, and show their application and potential use as new fundamental building blocks for change management in model-driven engineering.
[ { "created": "Mon, 8 Sep 2014 14:34:13 GMT", "version": "v1" } ]
2014-09-10
[ [ "Maoz", "Shahar", "" ], [ "Ringert", "Jan Oliver", "" ], [ "Rumpe", "Bernhard", "" ] ]
Models are heavily used in software engineering and, together with their systems, they evolve over time. Thus, managing their changes is an important challenge for system maintainability. Existing approaches to model differencing concentrate on heuristic matching between model elements and on finding and presenting differences at a concrete or abstract syntactic level. While showing some success, these approaches are inherently limited to comparing syntactic structures. This paper is a manifesto for research on semantic model differencing. We present our vision to develop semantic diff operators for model comparisons: operators whose input consists of two models and whose output is a set of diff witnesses, instances of one model that are not instances of the other. In particular, if the models are syntactically different but there are no diff witnesses, the models are semantically equivalent. We demonstrate our vision using two concrete diff operators, for class diagrams and for activity diagrams. We motivate the use of semantic diff operators, briefly discuss the algorithms to compute them, list related challenges, and show their application and potential use as new fundamental building blocks for change management in model-driven engineering.
1509.01624
Andrew Knyazev
Dong Tian, Hassan Mansour, Andrew Knyazev, Anthony Vetro
Chebyshev and Conjugate Gradient Filters for Graph Image Denoising
6 pages, 6 figures, accepted to 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)
Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on, vol., no., pp.1-6, 14-18 July 2014
10.1109/ICMEW.2014.6890711
MERL TR2014-062
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 3D image/video acquisition, different views are often captured with varying noise levels across the views. In this paper, we propose a graph-based image enhancement technique that uses a higher quality view to enhance a degraded view. A depth map is utilized as auxiliary information to match the perspectives of the two views. Our method performs graph-based filtering of the noisy image by directly computing a projection of the image to be filtered onto a lower dimensional Krylov subspace of the graph Laplacian. We discuss two graph spectral denoising methods: first using Chebyshev polynomials, and second using iterations of the conjugate gradient algorithm. Our framework generalizes previously known polynomial graph filters, and we demonstrate through numerical simulations that our proposed technique produces subjectively cleaner images with about 1-3 dB improvement in PSNR over existing polynomial graph filters.
[ { "created": "Fri, 4 Sep 2015 22:22:25 GMT", "version": "v1" } ]
2015-09-08
[ [ "Tian", "Dong", "" ], [ "Mansour", "Hassan", "" ], [ "Knyazev", "Andrew", "" ], [ "Vetro", "Anthony", "" ] ]
In 3D image/video acquisition, different views are often captured with varying noise levels across the views. In this paper, we propose a graph-based image enhancement technique that uses a higher quality view to enhance a degraded view. A depth map is utilized as auxiliary information to match the perspectives of the two views. Our method performs graph-based filtering of the noisy image by directly computing a projection of the image to be filtered onto a lower dimensional Krylov subspace of the graph Laplacian. We discuss two graph spectral denoising methods: first using Chebyshev polynomials, and second using iterations of the conjugate gradient algorithm. Our framework generalizes previously known polynomial graph filters, and we demonstrate through numerical simulations that our proposed technique produces subjectively cleaner images with about 1-3 dB improvement in PSNR over existing polynomial graph filters.
2111.14666
Assem Sadek
Assem Sadek, Guillaume Bono, Boris Chidlovskii, Christian Wolf
An in-depth experimental study of sensor usage and visual reasoning of robots navigating in real environments
null
null
null
null
cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual navigation by mobile robots is classically tackled through SLAM plus optimal planning, and more recently through end-to-end training of policies implemented as deep networks. While the former methods are often limited to waypoint planning, they have proven their efficiency even in real physical environments; the latter solutions are most frequently employed in simulation, but have been shown to be able to learn more complex visual reasoning involving complex semantic regularities. Navigation by real robots in physical environments is still an open problem. End-to-end training approaches have been thoroughly tested in simulation only, with experiments involving real robots being restricted to rare performance evaluations in simplified laboratory conditions. In this work we present an in-depth study of the performance and reasoning capacities of real physical agents, trained in simulation and deployed to two different physical environments. Beyond benchmarking, we provide insights into the generalization capabilities of different agents trained in different conditions. We visualize sensor usage and the importance of the different types of signals. We show that, for the PointGoal task, an agent pre-trained on a wide variety of tasks and fine-tuned on a simulated version of the target environment can reach competitive performance without modelling any sim2real transfer, i.e. by deploying the trained agent directly from simulation to a real physical robot.
[ { "created": "Mon, 29 Nov 2021 16:27:29 GMT", "version": "v1" } ]
2021-11-30
[ [ "Sadek", "Assem", "" ], [ "Bono", "Guillaume", "" ], [ "Chidlovskii", "Boris", "" ], [ "Wolf", "Christian", "" ] ]
Visual navigation by mobile robots is classically tackled through SLAM plus optimal planning, and more recently through end-to-end training of policies implemented as deep networks. While the former methods are often limited to waypoint planning, they have proven their efficiency even in real physical environments; the latter solutions are most frequently employed in simulation, but have been shown to be able to learn more complex visual reasoning involving complex semantic regularities. Navigation by real robots in physical environments is still an open problem. End-to-end training approaches have been thoroughly tested in simulation only, with experiments involving real robots being restricted to rare performance evaluations in simplified laboratory conditions. In this work we present an in-depth study of the performance and reasoning capacities of real physical agents, trained in simulation and deployed to two different physical environments. Beyond benchmarking, we provide insights into the generalization capabilities of different agents trained in different conditions. We visualize sensor usage and the importance of the different types of signals. We show that, for the PointGoal task, an agent pre-trained on a wide variety of tasks and fine-tuned on a simulated version of the target environment can reach competitive performance without modelling any sim2real transfer, i.e. by deploying the trained agent directly from simulation to a real physical robot.
2307.00758
Wenting Tang
Wenting Tang, Xingxing Wei, Bo Li (Beijing Key Laboratory of Digital Media, School of Computer Science and Engineering, Beihang University, Beijing, China)
Structured Network Pruning by Measuring Filter-wise Interactions
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structured network pruning is a practical approach to directly reduce computation cost while retaining the CNNs' generalization performance in real applications. However, identifying redundant filters is a core problem in structured network pruning, and current redundancy criteria only focus on individual filters' attributes. When pruning sparsity increases, these redundancy criteria are not effective or efficient enough. Since the filter-wise interaction also contributes to the CNN's prediction accuracy, we integrate the filter-wise interaction into the redundancy criterion. In our criterion, we introduce the filter importance and filter utilization strength to reflect the decision ability of individual and multiple filters. Utilizing this new redundancy criterion, we propose a structured network pruning approach, SNPFI (Structured Network Pruning by measuring Filter-wise Interaction). During pruning, SNPFI can automatically assign the proper sparsity based on the filter utilization strength and eliminate useless filters by filter importance. After pruning, SNPFI can effectively recover the pruned model's performance without iterative training by minimizing the interaction difference. We empirically demonstrate the effectiveness of SNPFI with several commonly used CNN models, including AlexNet, MobileNetv1, and ResNet-50, on various image classification datasets, including MNIST, CIFAR-10, and ImageNet. For all experimental CNN models, nearly 60% of computation is reduced during network compression while the classification accuracy is maintained.
[ { "created": "Mon, 3 Jul 2023 05:26:05 GMT", "version": "v1" } ]
2023-07-04
[ [ "Tang", "Wenting", "", "Beijing Key Laboratory of Digital\n Media, School of Computer Science and Engineering, Beihang University,\n Beijing, China" ], [ "Wei", "Xingxing", "", "Beijing Key Laboratory of Digital\n Media, School of Computer Science and Engineering, Beihang University,\n Beijing, China" ], [ "Li", "Bo", "", "Beijing Key Laboratory of Digital\n Media, School of Computer Science and Engineering, Beihang University,\n Beijing, China" ] ]
Structured network pruning is a practical approach to directly reduce computation cost while retaining the CNNs' generalization performance in real applications. However, identifying redundant filters is a core problem in structured network pruning, and current redundancy criteria only focus on individual filters' attributes. When pruning sparsity increases, these redundancy criteria are not effective or efficient enough. Since the filter-wise interaction also contributes to the CNN's prediction accuracy, we integrate the filter-wise interaction into the redundancy criterion. In our criterion, we introduce the filter importance and filter utilization strength to reflect the decision ability of individual and multiple filters. Utilizing this new redundancy criterion, we propose a structured network pruning approach, SNPFI (Structured Network Pruning by measuring Filter-wise Interaction). During pruning, SNPFI can automatically assign the proper sparsity based on the filter utilization strength and eliminate useless filters by filter importance. After pruning, SNPFI can effectively recover the pruned model's performance without iterative training by minimizing the interaction difference. We empirically demonstrate the effectiveness of SNPFI with several commonly used CNN models, including AlexNet, MobileNetv1, and ResNet-50, on various image classification datasets, including MNIST, CIFAR-10, and ImageNet. For all experimental CNN models, nearly 60% of computation is reduced during network compression while the classification accuracy is maintained.
2211.02682
Jacob Wahlgren
Jacob Wahlgren, Maya Gokhale, Ivy B. Peng
Evaluating Emerging CXL-enabled Memory Pooling for HPC Systems
10 pages, 13 figures. Accepted for publication in Workshop on Memory Centric High Performance Computing (MCHPC'22) at SC22
null
10.1109/MCHPC56545.2022.00007
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current HPC systems provide memory resources that are statically configured and tightly coupled with compute nodes. However, workloads on HPC systems are evolving. Diverse workloads lead to a need for configurable memory resources to achieve high performance and utilization. In this study, we evaluate a memory subsystem design leveraging CXL-enabled memory pooling. Two promising use cases of composable memory subsystems are studied -- fine-grained capacity provisioning and scalable bandwidth provisioning. We developed an emulator to explore the performance impact of various memory compositions. We also provide a profiler to identify the memory usage patterns in applications and their optimization opportunities. Seven scientific and six graph applications are evaluated on various emulated memory configurations. Three out of seven scientific applications had less than 10% performance impact when the pooled memory backed 75% of their memory footprint. The results also show that a dynamically configured high-bandwidth system can effectively support bandwidth-intensive unstructured mesh-based applications like OpenFOAM. Finally, we identify interference through shared memory pools as a practical challenge for adoption on HPC systems.
[ { "created": "Fri, 4 Nov 2022 18:03:11 GMT", "version": "v1" } ]
2023-03-23
[ [ "Wahlgren", "Jacob", "" ], [ "Gokhale", "Maya", "" ], [ "Peng", "Ivy B.", "" ] ]
Current HPC systems provide memory resources that are statically configured and tightly coupled with compute nodes. However, workloads on HPC systems are evolving. Diverse workloads lead to a need for configurable memory resources to achieve high performance and utilization. In this study, we evaluate a memory subsystem design leveraging CXL-enabled memory pooling. Two promising use cases of composable memory subsystems are studied -- fine-grained capacity provisioning and scalable bandwidth provisioning. We developed an emulator to explore the performance impact of various memory compositions. We also provide a profiler to identify the memory usage patterns in applications and their optimization opportunities. Seven scientific and six graph applications are evaluated on various emulated memory configurations. Three out of seven scientific applications had less than 10% performance impact when the pooled memory backed 75% of their memory footprint. The results also show that a dynamically configured high-bandwidth system can effectively support bandwidth-intensive unstructured mesh-based applications like OpenFOAM. Finally, we identify interference through shared memory pools as a practical challenge for adoption on HPC systems.
cs/0611036
Annie Bouyer
Anne Durand (CRAI), Pierre Drap (CRAI), Elise Meyer (CRAI), Pierre Grussenmeyer (CRAI), Jean-Pierre Perrin (CRAI)
Intra-site Level Cultural Heritage Documentation: Combination of Survey, Modeling and Imagery Data in a Web Information System
null
null
null
null
cs.DL
null
Cultural heritage documentation induces the use of computerized techniques to manage and preserve the information produced. Geographical information systems have proved their potential in this scope, but they are not always adapted for the management of features at the scale of a particular archaeological site. Moreover, computer applications in archaeology are often technology driven and software constrained. Thus, we propose a tool that tries to avoid these difficulties. We are developing an information system that works over the Internet and that is joined with a web site. Our aims are to assist the work of archaeological site managers and to provide a documentation tool about these sites, dedicated to everyone. We therefore devote our system both to the professionals who are in charge of the site, and to the general public who visits it or who wants information on it. The system permits exploratory analyses of the data, especially at spatial and temporal levels. We propose to record metadata about the archaeological features in XML and to access these features through interactive 2D and 3D representations, and through query systems (keywords and images). The 2D images, photos, or vectors are generated in SVG, while 3D models are generated in X3D. Archaeological features are also automatically integrated in a MySQL database. The web site is an exchange platform with the information system and is written in PHP. Our first application case is the medieval castle of Vianden, Luxembourg.
[ { "created": "Wed, 8 Nov 2006 17:35:52 GMT", "version": "v1" } ]
2007-05-23
[ [ "Durand", "Anne", "", "CRAI" ], [ "Drap", "Pierre", "", "CRAI" ], [ "Meyer", "Elise", "", "CRAI" ], [ "Grussenmeyer", "Pierre", "", "CRAI" ], [ "Perrin", "Jean-Pierre", "", "CRAI" ] ]
Cultural heritage documentation induces the use of computerized techniques to manage and preserve the information produced. Geographical information systems have proved their potential in this scope, but they are not always adapted for the management of features at the scale of a particular archaeological site. Moreover, computer applications in archaeology are often technology driven and software constrained. Thus, we propose a tool that tries to avoid these difficulties. We are developing an information system that works over the Internet and that is joined with a web site. Our aims are to assist the work of archaeological site managers and to provide a documentation tool about these sites, dedicated to everyone. We therefore devote our system both to the professionals who are in charge of the site, and to the general public who visits it or who wants information on it. The system permits exploratory analyses of the data, especially at spatial and temporal levels. We propose to record metadata about the archaeological features in XML and to access these features through interactive 2D and 3D representations, and through query systems (keywords and images). The 2D images, photos, or vectors are generated in SVG, while 3D models are generated in X3D. Archaeological features are also automatically integrated in a MySQL database. The web site is an exchange platform with the information system and is written in PHP. Our first application case is the medieval castle of Vianden, Luxembourg.
1501.04587
Naiyan Wang
Naiyan Wang, Siyi Li, Abhinav Gupta, Dit-Yan Yeung
Transferring Rich Feature Hierarchies for Robust Visual Tracking
null
null
null
null
cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural network (CNN) models have demonstrated great success in various computer vision tasks including image classification and object detection. However, some equally important tasks such as visual tracking remain relatively unexplored. We believe that a major hurdle that hinders the application of CNNs to visual tracking is the lack of properly labeled training data. While existing applications that liberate the power of CNNs often need an enormous amount of training data on the order of millions, visual tracking applications typically have only one labeled example in the first frame of each video. We address this research issue here by pre-training a CNN offline and then transferring the rich feature hierarchies learned to online tracking. The CNN is also fine-tuned during online tracking to adapt to the appearance of the tracked target specified in the first video frame. To fit the characteristics of object tracking, we first pre-train the CNN to recognize what is an object, and then propose to generate a probability map instead of producing a simple class label. Using two challenging open benchmarks for performance evaluation, our proposed tracker has demonstrated substantial improvement over other state-of-the-art trackers.
[ { "created": "Mon, 19 Jan 2015 18:54:34 GMT", "version": "v1" }, { "created": "Thu, 23 Apr 2015 06:18:09 GMT", "version": "v2" } ]
2015-04-24
[ [ "Wang", "Naiyan", "" ], [ "Li", "Siyi", "" ], [ "Gupta", "Abhinav", "" ], [ "Yeung", "Dit-Yan", "" ] ]
Convolutional neural network (CNN) models have demonstrated great success in various computer vision tasks including image classification and object detection. However, some equally important tasks such as visual tracking remain relatively unexplored. We believe that a major hurdle that hinders the application of CNNs to visual tracking is the lack of properly labeled training data. While existing applications that liberate the power of CNNs often need an enormous amount of training data on the order of millions, visual tracking applications typically have only one labeled example in the first frame of each video. We address this research issue here by pre-training a CNN offline and then transferring the rich feature hierarchies learned to online tracking. The CNN is also fine-tuned during online tracking to adapt to the appearance of the tracked target specified in the first video frame. To fit the characteristics of object tracking, we first pre-train the CNN to recognize what is an object, and then propose to generate a probability map instead of producing a simple class label. Using two challenging open benchmarks for performance evaluation, our proposed tracker has demonstrated substantial improvement over other state-of-the-art trackers.
1805.02276
Nikolaos Polatidis Dr
Elias Pimenidis, Nikolaos Polatidis, Haralambos Mouratidis
Mobile recommender systems: Identifying the major concepts
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper identifies the factors that have an impact on mobile recommender systems. Recommender systems have become a widely used technology in online applications where there is an information overload problem. Numerous applications such as e-Commerce, video platforms and social networks provide personalized recommendations to their users, and this has improved the user experience and vendor revenues. The development of recommender systems has been focused mostly on the proposal of new algorithms that provide more accurate recommendations. However, the use of mobile devices and the rapid growth of the internet and networking infrastructure have created the need for mobile recommender systems. The links between web and mobile recommender systems are described, along with how recommendations in mobile environments can be improved. This work focuses on identifying these links and on providing solid future directions that aim to lead to a more integrated mobile recommendation domain.
[ { "created": "Sun, 6 May 2018 20:46:55 GMT", "version": "v1" } ]
2018-05-08
[ [ "Pimenidis", "Elias", "" ], [ "Polatidis", "Nikolaos", "" ], [ "Mouratidis", "Haralambos", "" ] ]
This paper identifies the factors that have an impact on mobile recommender systems. Recommender systems have become a widely used technology in online applications where there is an information overload problem. Numerous applications such as e-Commerce, video platforms and social networks provide personalized recommendations to their users, and this has improved the user experience and vendor revenues. The development of recommender systems has been focused mostly on the proposal of new algorithms that provide more accurate recommendations. However, the use of mobile devices and the rapid growth of the internet and networking infrastructure have created the need for mobile recommender systems. The links between web and mobile recommender systems are described, along with how recommendations in mobile environments can be improved. This work focuses on identifying these links and on providing solid future directions that aim to lead to a more integrated mobile recommendation domain.
1609.02368
William Smith
Alassane Seck, William A. P. Smith, Arnaud Dessein, Bernard Tiddeman, Hannah Dee and Abhishek Dutta
Ear-to-ear Capture of Facial Intrinsics
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a practical approach to capturing ear-to-ear face models comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular albedo). Our approach is a hybrid of geometric and photometric methods and requires no geometric calibration. Photometric measurements made in a lightstage are used to estimate view dependent high resolution normal maps. We overcome the problem of having a single photometric viewpoint by capturing in multiple poses. We use uncalibrated multiview stereo to estimate a coarse base mesh to which the photometric views are registered. We propose a novel approach to robustly stitching surface normal and intrinsic texture data into a seamless, complete and highly detailed face model. The resulting relightable models provide photorealistic renderings in any view.
[ { "created": "Thu, 8 Sep 2016 10:24:44 GMT", "version": "v1" } ]
2016-09-09
[ [ "Seck", "Alassane", "" ], [ "Smith", "William A. P.", "" ], [ "Dessein", "Arnaud", "" ], [ "Tiddeman", "Bernard", "" ], [ "Dee", "Hannah", "" ], [ "Dutta", "Abhishek", "" ] ]
We present a practical approach to capturing ear-to-ear face models comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular albedo). Our approach is a hybrid of geometric and photometric methods and requires no geometric calibration. Photometric measurements made in a lightstage are used to estimate view dependent high resolution normal maps. We overcome the problem of having a single photometric viewpoint by capturing in multiple poses. We use uncalibrated multiview stereo to estimate a coarse base mesh to which the photometric views are registered. We propose a novel approach to robustly stitching surface normal and intrinsic texture data into a seamless, complete and highly detailed face model. The resulting relightable models provide photorealistic renderings in any view.
2106.02831
Amir Jalaly Bidgoly
Fahimeh Soltaninejad, Amir Jalaly Bidgoly
A novel method for recommendation systems using invasive weed optimization
null
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
One of the popular approaches in recommendation systems is Collaborative Filtering (CF). The most significant step in CF is choosing the appropriate set of users. For this purpose, similarity measures are usually used for computing the similarity between a specific user and the other users. This paper proposes a new invasive weed optimization (IWO) based CF approach that uses users' context to identify a set of important and effective users. By using a newly defined similarity measure based on both rating values and a measure called confidence, the proposed approach calculates the similarity between users and thus identifies and filters the users most similar to a specific user. It then uses IWO to calculate the importance degree of users and finally, by using the identified important users and their importance degrees, it predicts unknown ratings. To evaluate the proposed method, several experiments have been performed on two well-known real-world datasets, and the results show that the proposed method improves state-of-the-art results by up to 15% in terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
[ { "created": "Sat, 5 Jun 2021 08:12:41 GMT", "version": "v1" } ]
2021-06-08
[ [ "Soltaninejad", "Fahimeh", "" ], [ "Bidgoly", "Amir Jalaly", "" ] ]
One of the popular approaches in recommendation systems is Collaborative Filtering (CF). The most significant step in CF is choosing the appropriate set of users. For this purpose, similarity measures are usually used for computing the similarity between a specific user and the other users. This paper proposes a new invasive weed optimization (IWO) based CF approach that uses users' context to identify a set of important and effective users. By using a newly defined similarity measure based on both rating values and a measure called confidence, the proposed approach calculates the similarity between users and thus identifies and filters the users most similar to a specific user. It then uses IWO to calculate the importance degree of users and finally, by using the identified important users and their importance degrees, it predicts unknown ratings. To evaluate the proposed method, several experiments have been performed on two well-known real-world datasets, and the results show that the proposed method improves state-of-the-art results by up to 15% in terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
1106.4213
Erlin Yao
Erlin Yao, Mingyu Chen, Rui Wang, Wenli Zhang, Guangming Tan
A New and Efficient Algorithm-Based Fault Tolerance Scheme for A Million Way Parallelism
11 pages, 8 figures, 1 table, submitted to conference SC 2011
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-sa/3.0/
Fault tolerance overhead of high performance computing (HPC) applications is becoming critical to the efficient utilization of HPC systems at large scale. HPC applications typically tolerate fail-stop failures by checkpointing. Another promising method operates at the algorithm level and is called algorithmic recovery. These two methods can achieve high efficiency when the system scale is not very large, but both will lose their effectiveness when systems approach the scale of Exaflops, where the number of processors in the system is expected to reach one million. This paper develops a new and efficient algorithm-based fault tolerance scheme for HPC applications. When a failure occurs during the execution, we do not stop to wait for the recovery of corrupted data, but replace it with the corresponding redundant data and continue the execution. A background accelerated recovery method is also proposed to rebuild redundancy so as to tolerate multiple failures during the execution. To demonstrate the feasibility of our new scheme, we have incorporated it into the High Performance Linpack. Theoretical analysis demonstrates that our new fault tolerance scheme can still be effective even when the system scale reaches Exaflops. Experiments on the SiCortex SC5832 verify the feasibility of the scheme, and indicate that the advantage of our scheme is observable even at a small scale.
[ { "created": "Tue, 21 Jun 2011 14:24:43 GMT", "version": "v1" } ]
2011-06-22
[ [ "Yao", "Erlin", "" ], [ "Chen", "Mingyu", "" ], [ "Wang", "Rui", "" ], [ "Zhang", "Wenli", "" ], [ "Tan", "Guangming", "" ] ]
Fault tolerance overhead of high performance computing (HPC) applications is becoming critical to the efficient utilization of HPC systems at large scale. HPC applications typically tolerate fail-stop failures by checkpointing. Another promising method operates at the algorithm level and is called algorithmic recovery. These two methods can achieve high efficiency when the system scale is not very large, but both will lose their effectiveness when systems approach the scale of Exaflops, where the number of processors in the system is expected to reach one million. This paper develops a new and efficient algorithm-based fault tolerance scheme for HPC applications. When a failure occurs during the execution, we do not stop to wait for the recovery of corrupted data, but replace it with the corresponding redundant data and continue the execution. A background accelerated recovery method is also proposed to rebuild redundancy so as to tolerate multiple failures during the execution. To demonstrate the feasibility of our new scheme, we have incorporated it into the High Performance Linpack. Theoretical analysis demonstrates that our new fault tolerance scheme can still be effective even when the system scale reaches Exaflops. Experiments on the SiCortex SC5832 verify the feasibility of the scheme, and indicate that the advantage of our scheme is observable even at a small scale.
2406.11505
Gustavo Escobedo
Gustavo Escobedo, Marta Moscati, Peter Muellner, Simone Kopeinik, Dominik Kowald, Elisabeth Lex and Markus Schedl
Making Alice Appear Like Bob: A Probabilistic Preference Obfuscation Method For Implicit Feedback Recommendation Models
null
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Users' interaction or preference data used in recommender systems carry the risk of unintentionally revealing users' private attributes (e.g., gender or race). This risk becomes particularly concerning when the training data contains user preferences that can be used to infer these attributes, especially if they align with common stereotypes. This major privacy issue allows malicious attackers or other third parties to infer users' protected attributes. Previous efforts to address this issue have added or removed parts of users' preferences prior to or during model training to improve privacy, which often leads to decreases in recommendation accuracy. In this work, we introduce SBO, a novel probabilistic obfuscation method for user preference data designed to improve the accuracy--privacy trade-off for such recommendation scenarios. We apply SBO to three state-of-the-art recommendation models (i.e., BPR, MultVAE, and LightGCN) and two popular datasets (i.e., MovieLens-1M and LFM-2B). Our experiments reveal that SBO outperforms comparable approaches with respect to the accuracy--privacy trade-off. Specifically, we can reduce the leakage of users' protected attributes while maintaining on-par recommendation accuracy.
[ { "created": "Mon, 17 Jun 2024 13:05:36 GMT", "version": "v1" } ]
2024-06-18
[ [ "Escobedo", "Gustavo", "" ], [ "Moscati", "Marta", "" ], [ "Muellner", "Peter", "" ], [ "Kopeinik", "Simone", "" ], [ "Kowald", "Dominik", "" ], [ "Lex", "Elisabeth", "" ], [ "Schedl", "Markus", "" ] ]
Users' interaction or preference data used in recommender systems carry the risk of unintentionally revealing users' private attributes (e.g., gender or race). This risk becomes particularly concerning when the training data contains user preferences that can be used to infer these attributes, especially if they align with common stereotypes. This major privacy issue allows malicious attackers or other third parties to infer users' protected attributes. Previous efforts to address this issue have added or removed parts of users' preferences prior to or during model training to improve privacy, which often leads to decreases in recommendation accuracy. In this work, we introduce SBO, a novel probabilistic obfuscation method for user preference data designed to improve the accuracy--privacy trade-off for such recommendation scenarios. We apply SBO to three state-of-the-art recommendation models (i.e., BPR, MultVAE, and LightGCN) and two popular datasets (i.e., MovieLens-1M and LFM-2B). Our experiments reveal that SBO outperforms comparable approaches with respect to the accuracy--privacy trade-off. Specifically, we can reduce the leakage of users' protected attributes while maintaining on-par recommendation accuracy.
2210.17111
Tongyue He
Tongyue He, Yiming Chen, Junxin Chen, Wei Wang, Yicong Zhou
SEVGGNet-LSTM: a fused deep learning model for ECG classification
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a fused deep learning algorithm for ECG classification. It takes advantages of the combined convolutional and recurrent neural network for ECG classification, and the weight allocation capability of attention mechanism. The input ECG signals are firstly segmented and normalized, and then fed into the combined VGG and LSTM network for feature extraction and classification. An attention mechanism (SE block) is embedded into the core network for increasing the weight of important features. Two databases from different sources and devices are employed for performance validation, and the results well demonstrate the effectiveness and robustness of the proposed algorithm.
[ { "created": "Mon, 31 Oct 2022 07:36:48 GMT", "version": "v1" } ]
2022-11-01
[ [ "He", "Tongyue", "" ], [ "Chen", "Yiming", "" ], [ "Chen", "Junxin", "" ], [ "Wang", "Wei", "" ], [ "Zhou", "Yicong", "" ] ]
This paper presents a fused deep learning algorithm for ECG classification. It takes advantage of the combination of convolutional and recurrent neural networks for ECG classification, as well as the weight allocation capability of the attention mechanism. The input ECG signals are first segmented and normalized, and then fed into the combined VGG and LSTM network for feature extraction and classification. An attention mechanism (SE block) is embedded into the core network to increase the weight of important features. Two databases from different sources and devices are employed for performance validation, and the results demonstrate the effectiveness and robustness of the proposed algorithm.
2104.13114
Chaosheng Dong
Chaosheng Dong, Xiaojie Jin, Weihao Gao, Yijia Wang, Hongyi Zhang, Xiang Wu, Jianchao Yang, Xiaobing Liu
One Backward from Ten Forward, Subsampling for Large-Scale Deep Learning
13 pages
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning models in large-scale machine learning systems are often continuously trained with enormous data from production environments. The sheer volume of streaming training data poses a significant challenge to real-time training subsystems and ad-hoc sampling is the standard practice. Our key insight is that these deployed ML systems continuously perform forward passes on data instances during inference, but ad-hoc sampling does not take advantage of this substantial computational effort. Therefore, we propose to record a constant amount of information per instance from these forward passes. The extra information measurably improves the selection of which data instances should participate in forward and backward passes. A novel optimization framework is proposed to analyze this problem and we provide an efficient approximation algorithm under the framework of Mini-batch gradient descent as a practical solution. We also demonstrate the effectiveness of our framework and algorithm on several large-scale classification and regression tasks, when compared with competitive baselines widely used in industry.
[ { "created": "Tue, 27 Apr 2021 11:29:02 GMT", "version": "v1" } ]
2021-04-28
[ [ "Dong", "Chaosheng", "" ], [ "Jin", "Xiaojie", "" ], [ "Gao", "Weihao", "" ], [ "Wang", "Yijia", "" ], [ "Zhang", "Hongyi", "" ], [ "Wu", "Xiang", "" ], [ "Yang", "Jianchao", "" ], [ "Liu", "Xiaobing", "" ] ]
Deep learning models in large-scale machine learning systems are often continuously trained with enormous data from production environments. The sheer volume of streaming training data poses a significant challenge to real-time training subsystems and ad-hoc sampling is the standard practice. Our key insight is that these deployed ML systems continuously perform forward passes on data instances during inference, but ad-hoc sampling does not take advantage of this substantial computational effort. Therefore, we propose to record a constant amount of information per instance from these forward passes. The extra information measurably improves the selection of which data instances should participate in forward and backward passes. A novel optimization framework is proposed to analyze this problem and we provide an efficient approximation algorithm under the framework of Mini-batch gradient descent as a practical solution. We also demonstrate the effectiveness of our framework and algorithm on several large-scale classification and regression tasks, when compared with competitive baselines widely used in industry.
2302.06541
Maximilian Mozes
Maximilian Mozes, Jessica Hoffmann, Katrin Tomanek, Muhamed Kouate, Nithum Thain, Ann Yuan, Tolga Bolukbasi, Lucas Dixon
Towards Agile Text Classifiers for Everyone
Findings of EMNLP 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Text-based safety classifiers are widely used for content moderation and increasingly to tune generative language model behavior - a topic of growing concern for the safety of digital assistants and chatbots. However, different policies require different classifiers, and safety policies themselves improve from iteration and adaptation. This paper introduces and evaluates methods for agile text classification, whereby classifiers are trained using small, targeted datasets that can be quickly developed for a particular policy. Experimenting with 7 datasets from three safety-related domains, comprising 15 annotation schemes, led to our key finding: prompt-tuning large language models, like PaLM 62B, with a labeled dataset of as few as 80 examples can achieve state-of-the-art performance. We argue that this enables a paradigm shift for text classification, especially for models supporting safer online discourse. Instead of collecting millions of examples to attempt to create universal safety classifiers over months or years, classifiers could be tuned using small datasets, created by individuals or small organizations, tailored for specific use cases, and iterated on and adapted in the time-span of a day.
[ { "created": "Mon, 13 Feb 2023 17:34:13 GMT", "version": "v1" }, { "created": "Sat, 21 Oct 2023 11:49:09 GMT", "version": "v2" } ]
2023-10-24
[ [ "Mozes", "Maximilian", "" ], [ "Hoffmann", "Jessica", "" ], [ "Tomanek", "Katrin", "" ], [ "Kouate", "Muhamed", "" ], [ "Thain", "Nithum", "" ], [ "Yuan", "Ann", "" ], [ "Bolukbasi", "Tolga", "" ], [ "Dixon", "Lucas", "" ] ]
Text-based safety classifiers are widely used for content moderation and increasingly to tune generative language model behavior - a topic of growing concern for the safety of digital assistants and chatbots. However, different policies require different classifiers, and safety policies themselves improve from iteration and adaptation. This paper introduces and evaluates methods for agile text classification, whereby classifiers are trained using small, targeted datasets that can be quickly developed for a particular policy. Experimenting with 7 datasets from three safety-related domains, comprising 15 annotation schemes, led to our key finding: prompt-tuning large language models, like PaLM 62B, with a labeled dataset of as few as 80 examples can achieve state-of-the-art performance. We argue that this enables a paradigm shift for text classification, especially for models supporting safer online discourse. Instead of collecting millions of examples to attempt to create universal safety classifiers over months or years, classifiers could be tuned using small datasets, created by individuals or small organizations, tailored for specific use cases, and iterated on and adapted in the time-span of a day.
2107.04953
Jonathan Stray
Jonathan Stray
Designing Recommender Systems to Depolarize
to appear in First Monday, September 2021
null
null
null
cs.IR cs.CY cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Polarization is implicated in the erosion of democracy and the progression to violence, which makes the polarization properties of large algorithmic content selection systems (recommender systems) a matter of concern for peace and security. While algorithm-driven social media does not seem to be a primary driver of polarization at the country level, it could be a useful intervention point in polarized societies. This paper examines algorithmic depolarization interventions with the goal of conflict transformation: not suppressing or eliminating conflict but moving towards more constructive conflict. Algorithmic intervention is considered at three stages: which content is available (moderation), how content is selected and personalized (ranking), and content presentation and controls (user interface). Empirical studies of online conflict suggest that the exposure diversity intervention proposed as an antidote to "filter bubbles" can be improved and can even worsen polarization under some conditions. Using civility metrics in conjunction with diversity in content selection may be more effective. However, diversity-based interventions have not been tested at scale and may not work in the diverse and dynamic contexts of real platforms. Instead, intervening in platform polarization dynamics will likely require continuous monitoring of polarization metrics, such as the widely used "feeling thermometer." These metrics can be used to evaluate product features, and potentially engineered as algorithmic objectives. It may further prove necessary to include polarization measures in the objective functions of recommender algorithms to prevent optimization processes from creating conflict as a side effect.
[ { "created": "Sun, 11 Jul 2021 03:23:42 GMT", "version": "v1" } ]
2021-07-13
[ [ "Stray", "Jonathan", "" ] ]
Polarization is implicated in the erosion of democracy and the progression to violence, which makes the polarization properties of large algorithmic content selection systems (recommender systems) a matter of concern for peace and security. While algorithm-driven social media does not seem to be a primary driver of polarization at the country level, it could be a useful intervention point in polarized societies. This paper examines algorithmic depolarization interventions with the goal of conflict transformation: not suppressing or eliminating conflict but moving towards more constructive conflict. Algorithmic intervention is considered at three stages: which content is available (moderation), how content is selected and personalized (ranking), and content presentation and controls (user interface). Empirical studies of online conflict suggest that the exposure diversity intervention proposed as an antidote to "filter bubbles" can be improved and can even worsen polarization under some conditions. Using civility metrics in conjunction with diversity in content selection may be more effective. However, diversity-based interventions have not been tested at scale and may not work in the diverse and dynamic contexts of real platforms. Instead, intervening in platform polarization dynamics will likely require continuous monitoring of polarization metrics, such as the widely used "feeling thermometer." These metrics can be used to evaluate product features, and potentially engineered as algorithmic objectives. It may further prove necessary to include polarization measures in the objective functions of recommender algorithms to prevent optimization processes from creating conflict as a side effect.
1904.02228
Peter Potash
Peter Potash
The Effect of Downstream Classification Tasks for Evaluating Sentence Embeddings
5 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One popular method for quantitatively evaluating the utility of sentence embeddings involves using them in downstream language processing tasks that require sentence representations as input. One simple such task is classification, where the sentence representations are used to train and test models on several classification datasets. We argue that by evaluating sentence representations in such a manner, the goal of the representations becomes learning a low-dimensional factorization of a sentence-task label matrix. We show how characteristics of this matrix can affect the ability for a low-dimensional factorization to perform as sentence representations in a suite of classification tasks. Primarily, sentences that have more labels across all possible classification tasks have a higher reconstruction loss, however the general nature of this effect is ultimately dependent on the overall distribution of labels across all possible sentences.
[ { "created": "Wed, 3 Apr 2019 20:12:10 GMT", "version": "v1" }, { "created": "Mon, 27 May 2019 14:10:45 GMT", "version": "v2" } ]
2019-05-28
[ [ "Potash", "Peter", "" ] ]
One popular method for quantitatively evaluating the utility of sentence embeddings involves using them in downstream language processing tasks that require sentence representations as input. One simple such task is classification, where the sentence representations are used to train and test models on several classification datasets. We argue that by evaluating sentence representations in such a manner, the goal of the representations becomes learning a low-dimensional factorization of a sentence-task label matrix. We show how characteristics of this matrix can affect the ability of a low-dimensional factorization to perform as sentence representations in a suite of classification tasks. Primarily, sentences that have more labels across all possible classification tasks have a higher reconstruction loss; however, the general nature of this effect ultimately depends on the overall distribution of labels across all possible sentences.
1209.1738
Lukasz Kaiser
Diana Fischer (RWTH Aachen), Lukasz Kaiser (CNRS and LIAFA, Universite Paris Diderot)
Model Checking the Quantitative mu-Calculus on Linear Hybrid Systems
LMCS submission
Logical Methods in Computer Science, Volume 8, Issue 3 (September 20, 2012) lmcs:760
10.2168/LMCS-8(3:21)2012
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the model-checking problem for a quantitative extension of the modal mu-calculus on a class of hybrid systems. Qualitative model checking has been proved decidable and implemented for several classes of systems, but this is not the case for quantitative questions that arise naturally in this context. Recently, quantitative formalisms that subsume classical temporal logics and allow the measurement of interesting quantitative phenomena were introduced. We show how a powerful quantitative logic, the quantitative mu-calculus, can be model checked with arbitrary precision on initialised linear hybrid systems. To this end, we develop new techniques for the discretisation of continuous state spaces based on a special class of strategies in model-checking games and present a reduction to a class of counter parity games.
[ { "created": "Sat, 8 Sep 2012 18:22:30 GMT", "version": "v1" }, { "created": "Wed, 19 Sep 2012 09:08:44 GMT", "version": "v2" } ]
2015-07-01
[ [ "Fischer", "Diana", "", "RWTH Aachen" ], [ "Kaiser", "Lukasz", "", "CNRS and LIAFA, Universite\n Paris Diderot" ] ]
We study the model-checking problem for a quantitative extension of the modal mu-calculus on a class of hybrid systems. Qualitative model checking has been proved decidable and implemented for several classes of systems, but this is not the case for quantitative questions that arise naturally in this context. Recently, quantitative formalisms that subsume classical temporal logics and allow the measurement of interesting quantitative phenomena were introduced. We show how a powerful quantitative logic, the quantitative mu-calculus, can be model checked with arbitrary precision on initialised linear hybrid systems. To this end, we develop new techniques for the discretisation of continuous state spaces based on a special class of strategies in model-checking games and present a reduction to a class of counter parity games.
2307.14541
Cristina Gena
Davide D'Adamo, Emiliano Robert, Cristina Gena, Silvestro Roatta
Novel BCI paradigm for ALS patients based on EEG and Pupillary Accommodative Response
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Brain-computer interfaces (BCIs) are one of the few alternatives to enable locked-in syndrome (LIS) patients to communicate with the external world, while they are the only solution for complete locked-in syndrome (CLIS) patients, who lost the ability to control eye movements. However, successful usage of endogenous electroencephalogram(EEG)-based BCI applications is often not trivial, due to EEG variations between and within sessions and long user training required. In this work we suggest an approach to deal with this two main limitations of EEG-BCIs by inserting a progressive and expandable neurofeedback training program, able to continuously tailor the classifier to the specific user, into a multimodal BCI paradigm. We propose indeed the integration of EEG with a non-brain signal: the pupillary accommodative response (PAR). The PAR is a change in pupil size associated with gaze shifts from far to close targets; it is not governed by the somatic nervous system and is thus potentially preserved after the evolution from LIS to CLIS, which often occurs in neurodegenerative diseases, such as amyotrophic lateral sclerosis. Multimodal BCIs have been broadly investigated in literature, due to their ability to yield better overall control performances, but this would be the first attempt combining EEG and PAR. In the context of the BciPar4Sla, we are exploiting these two signals, with the aim of developing a more reliable BCI, adaptive to the extent of evolving together with the user's ability to elicit the brain phenomena needed for optimal control, and providing support even in the transition from LIS to CLIS.
[ { "created": "Wed, 26 Jul 2023 23:15:50 GMT", "version": "v1" } ]
2023-07-28
[ [ "D'Adamo", "Davide", "" ], [ "Robert", "Emiliano", "" ], [ "Gena", "Cristina", "" ], [ "Roatta", "Silvestro", "" ] ]
Brain-computer interfaces (BCIs) are one of the few alternatives that enable locked-in syndrome (LIS) patients to communicate with the external world, and they are the only solution for complete locked-in syndrome (CLIS) patients, who have lost the ability to control eye movements. However, successful use of endogenous electroencephalogram (EEG)-based BCI applications is often not trivial, due to EEG variations between and within sessions and the long user training required. In this work we suggest an approach to deal with these two main limitations of EEG-BCIs by inserting a progressive and expandable neurofeedback training program, able to continuously tailor the classifier to the specific user, into a multimodal BCI paradigm. Specifically, we propose the integration of EEG with a non-brain signal: the pupillary accommodative response (PAR). The PAR is a change in pupil size associated with gaze shifts from far to close targets; it is not governed by the somatic nervous system and is thus potentially preserved after the evolution from LIS to CLIS, which often occurs in neurodegenerative diseases such as amyotrophic lateral sclerosis. Multimodal BCIs have been broadly investigated in the literature, due to their ability to yield better overall control performance, but this would be the first attempt to combine EEG and PAR. In the context of BciPar4Sla, we are exploiting these two signals with the aim of developing a more reliable BCI, adaptive to the extent of evolving together with the user's ability to elicit the brain phenomena needed for optimal control, and providing support even in the transition from LIS to CLIS.
2312.13462
Gunnar Kudrjavets
Gunnar Kudrjavets (University of Groningen), Aditya Kumar (Google), Jeff Thomas (Meta Platforms, Inc.), Ayushi Rastogi (University of Groningen)
What Do You Mean by Memory? When Engineers Are Lost in the Maze of Complexity
3 pages. To be published in the 46th International Conference on Software Engineering (ICSE 2024), April 14 - April 20 2024, Lisbon, Portugal
null
10.1145/3639477.3639735
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An accepted practice to decrease applications' memory usage is to reduce the amount and frequency of memory allocations. Factors such as (a) the prevalence of out-of-memory (OOM) killers, (b) memory allocations in modern programming languages done implicitly, (c) overcommitting being a default strategy in the Linux kernel, and (d) the rise in complexity and terminology related to memory management makes the existing guidance inefficient. The industry needs detailed guidelines for optimizing memory usage targeting specific operating systems (OS) and programming language types.
[ { "created": "Wed, 20 Dec 2023 22:26:15 GMT", "version": "v1" } ]
2024-01-09
[ [ "Kudrjavets", "Gunnar", "", "University of Groningen" ], [ "Kumar", "Aditya", "", "Google" ], [ "Thomas", "Jeff", "", "Meta Platforms, Inc." ], [ "Rastogi", "Ayushi", "", "University of Groningen" ] ]
An accepted practice for decreasing applications' memory usage is to reduce the amount and frequency of memory allocations. Factors such as (a) the prevalence of out-of-memory (OOM) killers, (b) memory allocations in modern programming languages being done implicitly, (c) overcommitting being a default strategy in the Linux kernel, and (d) the rise in complexity and terminology related to memory management make the existing guidance inefficient. The industry needs detailed guidelines for optimizing memory usage that target specific operating systems (OS) and programming language types.
2310.00981
Xixi Lu
Bart J. Verhoef and Xixi Lu
Using Reinforcement Learning to Optimize Responses in Care Processes: A Case Study on Aggression Incidents
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Previous studies have used prescriptive process monitoring to find actionable policies in business processes and conducted case studies in similar domains, such as the loan application process and the traffic fine process. However, care processes tend to be more dynamic and complex. For example, at any stage of a care process, a multitude of actions is possible. In this paper, we follow the reinforcement approach and train a Markov decision process using event data from a care process. The goal was to find optimal policies for staff members when clients are displaying any type of aggressive behavior. We used the reinforcement learning algorithms Q-learning and SARSA to find optimal policies. Results showed that the policies derived from these algorithms are similar to the most frequent actions currently used but provide the staff members with a few more options in certain situations.
[ { "created": "Mon, 2 Oct 2023 08:43:29 GMT", "version": "v1" } ]
2023-10-03
[ [ "Verhoef", "Bart J.", "" ], [ "Lu", "Xixi", "" ] ]
Previous studies have used prescriptive process monitoring to find actionable policies in business processes and conducted case studies in similar domains, such as the loan application process and the traffic fine process. However, care processes tend to be more dynamic and complex. For example, at any stage of a care process, a multitude of actions is possible. In this paper, we follow the reinforcement learning approach and train a Markov decision process using event data from a care process. The goal was to find optimal policies for staff members when clients are displaying any type of aggressive behavior. We used the reinforcement learning algorithms Q-learning and SARSA to find optimal policies. Results showed that the policies derived from these algorithms are similar to the most frequent actions currently used but provide the staff members with a few more options in certain situations.
1512.00524
Marc Juarez
Marc Juarez, Mohsen Imani, Mike Perry, Claudia Diaz, Matthew Wright
Toward an Efficient Website Fingerprinting Defense
To appear In the proceedings of the European Symposium on Research in Computer Security (ESORICS), pp. 20, Springer, 2016
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Website Fingerprinting attacks enable a passive eavesdropper to recover the user's otherwise anonymized web browsing activity by matching the observed traffic with prerecorded web traffic templates. The defenses that have been proposed to counter these attacks are impractical for deployment in real-world systems due to their high cost in terms of added delay and bandwidth overhead. Further, these defenses have been designed to counter attacks that, despite their high success rates, have been criticized for assuming unrealistic attack conditions in the evaluation setting. In this paper, we propose a novel, lightweight defense based on Adaptive Padding that provides a sufficient level of security against website fingerprinting, particularly in realistic evaluation conditions. In a closed-world setting, this defense reduces the accuracy of the state-of-the-art attack from 91% to 20%, while introducing zero latency overhead and less than 60% bandwidth overhead. In an open-world, the attack precision is just 1% and drops further as the number of sites grows.
[ { "created": "Wed, 2 Dec 2015 00:14:16 GMT", "version": "v1" }, { "created": "Mon, 28 Mar 2016 19:25:56 GMT", "version": "v2" }, { "created": "Tue, 19 Jul 2016 10:18:51 GMT", "version": "v3" } ]
2016-07-20
[ [ "Juarez", "Marc", "" ], [ "Imani", "Mohsen", "" ], [ "Perry", "Mike", "" ], [ "Diaz", "Claudia", "" ], [ "Wright", "Matthew", "" ] ]
Website Fingerprinting attacks enable a passive eavesdropper to recover the user's otherwise anonymized web browsing activity by matching the observed traffic with prerecorded web traffic templates. The defenses that have been proposed to counter these attacks are impractical for deployment in real-world systems due to their high cost in terms of added delay and bandwidth overhead. Further, these defenses have been designed to counter attacks that, despite their high success rates, have been criticized for assuming unrealistic attack conditions in the evaluation setting. In this paper, we propose a novel, lightweight defense based on Adaptive Padding that provides a sufficient level of security against website fingerprinting, particularly in realistic evaluation conditions. In a closed-world setting, this defense reduces the accuracy of the state-of-the-art attack from 91% to 20%, while introducing zero latency overhead and less than 60% bandwidth overhead. In an open-world, the attack precision is just 1% and drops further as the number of sites grows.
1601.05900
Jarrod Moore
Margareta Ackerman and Jarrod Moore
When is Clustering Perturbation Robust?
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering is a fundamental data mining tool that aims to divide data into groups of similar items. Generally, intuition about clustering reflects the ideal case -- exact data sets endowed with flawless dissimilarity between individual instances. In practice however, these cases are in the minority, and clustering applications are typically characterized by noisy data sets with approximate pairwise dissimilarities. As such, the efficacy of clustering methods in practical applications necessitates robustness to perturbations. In this paper, we perform a formal analysis of perturbation robustness, revealing that the extent to which algorithms can exhibit this desirable characteristic is inherently limited, and identifying the types of structures that allow popular clustering paradigms to discover meaningful clusters in spite of faulty data.
[ { "created": "Fri, 22 Jan 2016 08:01:58 GMT", "version": "v1" } ]
2016-01-25
[ [ "Ackerman", "Margareta", "" ], [ "Moore", "Jarrod", "" ] ]
Clustering is a fundamental data mining tool that aims to divide data into groups of similar items. Generally, intuition about clustering reflects the ideal case -- exact data sets endowed with flawless dissimilarity between individual instances. In practice however, these cases are in the minority, and clustering applications are typically characterized by noisy data sets with approximate pairwise dissimilarities. As such, the efficacy of clustering methods in practical applications necessitates robustness to perturbations. In this paper, we perform a formal analysis of perturbation robustness, revealing that the extent to which algorithms can exhibit this desirable characteristic is inherently limited, and identifying the types of structures that allow popular clustering paradigms to discover meaningful clusters in spite of faulty data.
2202.13990
Maxime Bombar
Maxime Bombar and Alain Couvreur and Thomas Debris-Alazard
On Codes and Learning With Errors over Function Fields
null
null
null
null
cs.CR math.NT
http://creativecommons.org/licenses/by/4.0/
It is a long-standing open problem to find search-to-decision reductions for structured versions of the decoding problem of linear codes. Such results in the lattice-based setting have been carried out using number fields: Polynomial-LWE, Ring-LWE, Module-LWE and so on. We propose a function field version of the LWE problem. This new framework leads to another point of view on structured codes, e.g. quasi-cyclic codes, strengthening the connection between lattice-based and code-based cryptography. In particular, we obtain the first search-to-decision reduction for structured codes. Following the historical constructions in lattice-based cryptography, we instantiate our construction with function field analogues of cyclotomic fields, namely Carlitz extensions, leading to search-to-decision reductions on various versions of Ring-LPN, which have applications to secure multiparty computation and to an authentication protocol.
[ { "created": "Mon, 28 Feb 2022 17:43:59 GMT", "version": "v1" } ]
2022-03-01
[ [ "Bombar", "Maxime", "" ], [ "Couvreur", "Alain", "" ], [ "Debris-Alazard", "Thomas", "" ] ]
It is a long-standing open problem to find search-to-decision reductions for structured versions of the decoding problem of linear codes. Such results in the lattice-based setting have been carried out using number fields: Polynomial-LWE, Ring-LWE, Module-LWE and so on. We propose a function field version of the LWE problem. This new framework leads to another point of view on structured codes, e.g. quasi-cyclic codes, strengthening the connection between lattice-based and code-based cryptography. In particular, we obtain the first search-to-decision reduction for structured codes. Following the historical constructions in lattice-based cryptography, we instantiate our construction with function field analogues of cyclotomic fields, namely Carlitz extensions, leading to search-to-decision reductions on various versions of Ring-LPN, which have applications to secure multiparty computation and to an authentication protocol.
2403.08161
Zhonglin Sun
Zhonglin Sun, Chen Feng, Ioannis Patras, Georgios Tzimiropoulos
LAFS: Landmark-based Facial Self-supervised Learning for Face Recognition
accepted to CVPR 2024
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we focus on learning facial representations that can be adapted to train effective face recognition models, particularly in the absence of labels. Firstly, compared with existing labelled face datasets, a vastly larger magnitude of unlabeled faces exists in the real world. We explore the learning strategy of these unlabeled facial images through self-supervised pretraining to transfer generalized face recognition performance. Moreover, motivated by one recent finding that the face saliency area is critical for face recognition, in contrast to utilizing random cropped blocks of images for constructing augmentations in pretraining, we utilize patches localized by extracted facial landmarks. This enables our method - namely LAndmark-based Facial Self-supervised learning (LAFS) - to learn key representations that are more critical for face recognition. We also incorporate two landmark-specific augmentations which introduce more diversity of landmark information to further regularize the learning. With learned landmark-based facial representations, we further adapt the representation for face recognition with regularization mitigating variations in landmark positions. Our method achieves significant improvement over the state-of-the-art on multiple face recognition benchmarks, especially on more challenging few-shot scenarios.
[ { "created": "Wed, 13 Mar 2024 01:07:55 GMT", "version": "v1" } ]
2024-03-14
[ [ "Sun", "Zhonglin", "" ], [ "Feng", "Chen", "" ], [ "Patras", "Ioannis", "" ], [ "Tzimiropoulos", "Georgios", "" ] ]
In this work we focus on learning facial representations that can be adapted to train effective face recognition models, particularly in the absence of labels. Firstly, compared with existing labelled face datasets, a vastly larger magnitude of unlabeled faces exists in the real world. We explore the learning strategy of these unlabeled facial images through self-supervised pretraining to transfer generalized face recognition performance. Moreover, motivated by one recent finding that the face saliency area is critical for face recognition, in contrast to utilizing random cropped blocks of images for constructing augmentations in pretraining, we utilize patches localized by extracted facial landmarks. This enables our method - namely LAndmark-based Facial Self-supervised learning (LAFS) - to learn key representations that are more critical for face recognition. We also incorporate two landmark-specific augmentations which introduce more diversity of landmark information to further regularize the learning. With learned landmark-based facial representations, we further adapt the representation for face recognition with regularization mitigating variations in landmark positions. Our method achieves significant improvement over the state-of-the-art on multiple face recognition benchmarks, especially on more challenging few-shot scenarios.
2210.01266
Benjamin Th\'erien
Benjamin Th\'erien and Krzysztof Czarnecki
Interpretable Deep Tracking
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imagine experiencing a crash as the passenger of an autonomous vehicle. Wouldn't you want to know why it happened? Current end-to-end optimizable deep neural networks (DNNs) in 3D detection, multi-object tracking, and motion forecasting provide little to no explanations about how they make their decisions. To help bridge this gap, we design an end-to-end optimizable multi-object tracking architecture and training protocol inspired by the recently proposed method of interchange intervention training (IIT). By enumerating different tracking decisions and associated reasoning procedures, we can train individual networks to reason about the possible decisions via IIT. Each network's decisions can be explained by the high-level structural causal model (SCM) it is trained in alignment with. Moreover, our proposed model learns to rank these outcomes, leveraging the promise of deep learning in end-to-end training, while being inherently interpretable.
[ { "created": "Mon, 3 Oct 2022 23:15:13 GMT", "version": "v1" } ]
2022-10-05
[ [ "Thérien", "Benjamin", "" ], [ "Czarnecki", "Krzysztof", "" ] ]
Imagine experiencing a crash as the passenger of an autonomous vehicle. Wouldn't you want to know why it happened? Current end-to-end optimizable deep neural networks (DNNs) in 3D detection, multi-object tracking, and motion forecasting provide little to no explanations about how they make their decisions. To help bridge this gap, we design an end-to-end optimizable multi-object tracking architecture and training protocol inspired by the recently proposed method of interchange intervention training (IIT). By enumerating different tracking decisions and associated reasoning procedures, we can train individual networks to reason about the possible decisions via IIT. Each network's decisions can be explained by the high-level structural causal model (SCM) it is trained in alignment with. Moreover, our proposed model learns to rank these outcomes, leveraging the promise of deep learning in end-to-end training, while being inherently interpretable.
2310.02550
Wanli Ni
Jianyang Ren, Wanli Ni, Hui Tian, Gaofeng Nie
Convergence Analysis and Latency Minimization for Semi-Federated Learning in Massive IoT Networks
This paper has been accepted by IEEE Transactions on Green Communications and Networking
null
10.1109/TGCN.2023.3309657
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the number of sensors becomes massive in Internet of Things (IoT) networks, the amount of data is enormous. To process data in real-time while protecting user privacy, federated learning (FL) has been regarded as an enabling technique to push edge intelligence into IoT networks with massive devices. However, FL latency increases dramatically due to the increase of the number of parameters in deep neural networks and the limited computation and communication capabilities of IoT devices. To address this issue, we propose a semi-federated learning (SemiFL) paradigm in which network pruning and over-the-air computation are efficiently applied. To be specific, each small base station collects the raw data from its served sensors and trains its local pruned model. After that, the global aggregation of local gradients is achieved through over-the-air computation. We first analyze the performance of the proposed SemiFL by deriving its convergence upper bound. To reduce latency, a convergence-constrained SemiFL latency minimization problem is formulated. By decoupling the original problem into several sub-problems, iterative algorithms are designed to solve them efficiently. Finally, numerical simulations are conducted to verify the effectiveness of our proposed scheme in reducing latency and guaranteeing the identification accuracy.
[ { "created": "Wed, 4 Oct 2023 03:18:29 GMT", "version": "v1" } ]
2023-10-05
[ [ "Ren", "Jianyang", "" ], [ "Ni", "Wanli", "" ], [ "Tian", "Hui", "" ], [ "Nie", "Gaofeng", "" ] ]
As the number of sensors becomes massive in Internet of Things (IoT) networks, the amount of data is enormous. To process data in real-time while protecting user privacy, federated learning (FL) has been regarded as an enabling technique to push edge intelligence into IoT networks with massive devices. However, FL latency increases dramatically due to the increase of the number of parameters in deep neural networks and the limited computation and communication capabilities of IoT devices. To address this issue, we propose a semi-federated learning (SemiFL) paradigm in which network pruning and over-the-air computation are efficiently applied. To be specific, each small base station collects the raw data from its served sensors and trains its local pruned model. After that, the global aggregation of local gradients is achieved through over-the-air computation. We first analyze the performance of the proposed SemiFL by deriving its convergence upper bound. To reduce latency, a convergence-constrained SemiFL latency minimization problem is formulated. By decoupling the original problem into several sub-problems, iterative algorithms are designed to solve them efficiently. Finally, numerical simulations are conducted to verify the effectiveness of our proposed scheme in reducing latency and guaranteeing the identification accuracy.
1004.3258
Vishal Goyal
A. Mosavi
Multiple Criteria Decision-Making Preprocessing Using Data Mining Tools
International Journal of Computer Science Issues at http://ijcsi.org/articles/Multiple-Criteria-Decision-Making-Preprocessing-Using-Data-Mining-Tools.php
IJCSI, Volume 7, Issue 2, March 2010
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-life engineering optimization problems require Multiobjective Optimization (MOO) tools, as these problems are highly nonlinear. Since the process of Multiple Criteria Decision-Making (MCDM) has expanded considerably, most MOO problems in different disciplines can be classified on its basis; MCDM methods have therefore gained wide popularity in different sciences and applications. Meanwhile, the increasing number of components, variables, parameters, constraints, and objectives involved in the process has made it very complicated. Although the new generation of MOO tools has made the optimization process more automated, initializing the process, setting the initial values of simulation tools, and identifying the effective input variables and objectives in order to reach a smaller design space remain complicated. In this situation, adding a preprocessing step to the MCDM procedure can make a considerable difference by organizing the input variables according to their effects on the optimization objectives of the system. The aim of this paper is to introduce the classification task of data mining as an effective option for identifying the most effective variables of MCDM systems. To evaluate the effectiveness of the proposed method, an example is given for 3D wing design.
[ { "created": "Mon, 19 Apr 2010 17:53:38 GMT", "version": "v1" } ]
2010-04-20
[ [ "Mosavi", "A.", "" ] ]
Real-life engineering optimization problems require Multiobjective Optimization (MOO) tools, as these problems are highly nonlinear. Since the process of Multiple Criteria Decision-Making (MCDM) has expanded considerably, most MOO problems in different disciplines can be classified on its basis; MCDM methods have therefore gained wide popularity in different sciences and applications. Meanwhile, the increasing number of components, variables, parameters, constraints, and objectives involved in the process has made it very complicated. Although the new generation of MOO tools has made the optimization process more automated, initializing the process, setting the initial values of simulation tools, and identifying the effective input variables and objectives in order to reach a smaller design space remain complicated. In this situation, adding a preprocessing step to the MCDM procedure can make a considerable difference by organizing the input variables according to their effects on the optimization objectives of the system. The aim of this paper is to introduce the classification task of data mining as an effective option for identifying the most effective variables of MCDM systems. To evaluate the effectiveness of the proposed method, an example is given for 3D wing design.
2011.00718
Song Fang
Song Fang and Quanyan Zhu
Fundamental Limits of Obfuscation for Linear Gaussian Dynamical Systems: An Information-Theoretic Approach
arXiv admin note: text overlap with arXiv:2008.04893
null
null
null
cs.IT cs.CR cs.LG cs.SY eess.SP eess.SY math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the fundamental limits of obfuscation in terms of privacy-distortion tradeoffs for linear Gaussian dynamical systems via an information-theoretic approach. Particularly, we obtain analytical formulas that capture the fundamental privacy-distortion tradeoffs when privacy masks are to be added to the outputs of the dynamical systems, while indicating explicitly how to design the privacy masks in an optimal way: The privacy masks should be colored Gaussian with power spectra shaped specifically based upon the system and noise properties.
[ { "created": "Thu, 29 Oct 2020 20:05:50 GMT", "version": "v1" } ]
2020-11-03
[ [ "Fang", "Song", "" ], [ "Zhu", "Quanyan", "" ] ]
In this paper, we study the fundamental limits of obfuscation in terms of privacy-distortion tradeoffs for linear Gaussian dynamical systems via an information-theoretic approach. Particularly, we obtain analytical formulas that capture the fundamental privacy-distortion tradeoffs when privacy masks are to be added to the outputs of the dynamical systems, while indicating explicitly how to design the privacy masks in an optimal way: The privacy masks should be colored Gaussian with power spectra shaped specifically based upon the system and noise properties.
2112.10065
Seo Jin Park
Seo Jin Park, Joshua Fried, Sunghyun Kim, Mohammad Alizadeh, Adam Belay
Efficient Strong Scaling Through Burst Parallel Training
MLSys'22
null
null
null
cs.DC cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As emerging deep neural network (DNN) models continue to grow in size, using large GPU clusters to train DNNs is becoming an essential requirement for achieving acceptable training times. In this paper, we consider the case where future increases in cluster size will cause the global batch size that can be used to train models to reach a fundamental limit: beyond a certain point, larger global batch sizes cause sample efficiency to degrade, increasing overall time to accuracy. As a result, to achieve further improvements in training performance, we must instead consider "strong scaling" strategies that hold the global batch size constant and allocate smaller batches to each GPU. Unfortunately, this makes it significantly more difficult to use cluster resources efficiently. We present DeepPool, a system that addresses this efficiency challenge through two key ideas. First, burst parallelism allocates large numbers of GPUs to foreground jobs in bursts to exploit the unevenness in parallelism across layers. Second, GPU multiplexing prioritizes throughput for foreground training jobs, while packing in background training jobs to reclaim underutilized GPU resources, thereby improving cluster-wide utilization. Together, these two ideas enable DeepPool to deliver a 1.2 - 2.3x improvement in total cluster throughput over standard data parallelism with a single task when the cluster scale is large.
[ { "created": "Sun, 19 Dec 2021 05:18:39 GMT", "version": "v1" }, { "created": "Sat, 19 Mar 2022 01:54:37 GMT", "version": "v2" }, { "created": "Mon, 23 May 2022 20:51:22 GMT", "version": "v3" } ]
2022-05-25
[ [ "Park", "Seo Jin", "" ], [ "Fried", "Joshua", "" ], [ "Kim", "Sunghyun", "" ], [ "Alizadeh", "Mohammad", "" ], [ "Belay", "Adam", "" ] ]
As emerging deep neural network (DNN) models continue to grow in size, using large GPU clusters to train DNNs is becoming an essential requirement for achieving acceptable training times. In this paper, we consider the case where future increases in cluster size will cause the global batch size that can be used to train models to reach a fundamental limit: beyond a certain point, larger global batch sizes cause sample efficiency to degrade, increasing overall time to accuracy. As a result, to achieve further improvements in training performance, we must instead consider "strong scaling" strategies that hold the global batch size constant and allocate smaller batches to each GPU. Unfortunately, this makes it significantly more difficult to use cluster resources efficiently. We present DeepPool, a system that addresses this efficiency challenge through two key ideas. First, burst parallelism allocates large numbers of GPUs to foreground jobs in bursts to exploit the unevenness in parallelism across layers. Second, GPU multiplexing prioritizes throughput for foreground training jobs, while packing in background training jobs to reclaim underutilized GPU resources, thereby improving cluster-wide utilization. Together, these two ideas enable DeepPool to deliver a 1.2 - 2.3x improvement in total cluster throughput over standard data parallelism with a single task when the cluster scale is large.
2302.06433
Emadeldeen Eldele
Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, Xiaoli Li
Label-efficient Time Series Representation Learning: A Review
Accepted in the IEEE Transactions on Artificial Intelligence (TAI) https://ieeexplore.ieee.org/document/10601520
null
10.1109/TAI.2024.3430236
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Label-efficient time series representation learning, which aims to learn effective representations with limited labeled data, is crucial for deploying deep learning models in real-world applications. To address the scarcity of labeled time series data, various strategies, e.g., transfer learning, self-supervised learning, and semi-supervised learning, have been developed. In this survey, we introduce a novel taxonomy for the first time, categorizing existing approaches as in-domain or cross-domain, based on whether or not they rely on external data sources. Furthermore, we present a review of the recent advances in each strategy, summarize the limitations of current methodologies, and suggest future research directions that promise further improvements in the field.
[ { "created": "Mon, 13 Feb 2023 15:12:15 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2023 05:09:29 GMT", "version": "v2" }, { "created": "Mon, 26 Feb 2024 03:27:46 GMT", "version": "v3" }, { "created": "Wed, 24 Jul 2024 03:43:32 GMT", "version": "v4" } ]
2024-07-25
[ [ "Eldele", "Emadeldeen", "" ], [ "Ragab", "Mohamed", "" ], [ "Chen", "Zhenghua", "" ], [ "Wu", "Min", "" ], [ "Kwoh", "Chee-Keong", "" ], [ "Li", "Xiaoli", "" ] ]
Label-efficient time series representation learning, which aims to learn effective representations with limited labeled data, is crucial for deploying deep learning models in real-world applications. To address the scarcity of labeled time series data, various strategies, e.g., transfer learning, self-supervised learning, and semi-supervised learning, have been developed. In this survey, we introduce a novel taxonomy for the first time, categorizing existing approaches as in-domain or cross-domain, based on whether or not they rely on external data sources. Furthermore, we present a review of the recent advances in each strategy, summarize the limitations of current methodologies, and suggest future research directions that promise further improvements in the field.
1808.09406
Dominik Peters
Vittorio Bil\`o, Ioannis Caragiannis, Michele Flammini, Ayumi Igarashi, Gianpiero Monaco, Dominik Peters, Cosimo Vinci, William S. Zwicker
Almost Envy-Free Allocations with Connected Bundles
Accepted journal version
Games and Economic Behavior, 131:197-221, 2022
10.1016/j.geb.2021.11.006
null
cs.GT econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph. If the graph is a path and the utility functions are monotonic over bundles, we show the existence of EF1 allocations for at most four agents, and the existence of EF2 allocations for any number of agents; our proofs involve discrete analogues of Stromquist's moving-knife protocol and the Su--Simmons argument based on Sperner's lemma. For identical utilities, we provide a polynomial-time algorithm that computes an EF1 allocation for any number of agents. For the case of two agents, we characterize the class of graphs that guarantee the existence of EF1 allocations as those whose biconnected components are arranged in a path; this property can be checked in linear time.
[ { "created": "Tue, 28 Aug 2018 16:57:17 GMT", "version": "v1" }, { "created": "Fri, 20 May 2022 16:09:32 GMT", "version": "v2" } ]
2022-05-23
[ [ "Bilò", "Vittorio", "" ], [ "Caragiannis", "Ioannis", "" ], [ "Flammini", "Michele", "" ], [ "Igarashi", "Ayumi", "" ], [ "Monaco", "Gianpiero", "" ], [ "Peters", "Dominik", "" ], [ "Vinci", "Cosimo", "" ], [ "Zwicker", "William S.", "" ] ]
We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph. If the graph is a path and the utility functions are monotonic over bundles, we show the existence of EF1 allocations for at most four agents, and the existence of EF2 allocations for any number of agents; our proofs involve discrete analogues of Stromquist's moving-knife protocol and the Su--Simmons argument based on Sperner's lemma. For identical utilities, we provide a polynomial-time algorithm that computes an EF1 allocation for any number of agents. For the case of two agents, we characterize the class of graphs that guarantee the existence of EF1 allocations as those whose biconnected components are arranged in a path; this property can be checked in linear time.
1410.5010
Georg Hager
Holger Stengel, Jan Treibig, Georg Hager, Gerhard Wellein
Quantifying performance bottlenecks of stencil computations using the Execution-Cache-Memory model
10 pages, 8 figures. Added Roofline comparison and other minor improvements
null
10.1145/2751205.2751240
null
cs.PF cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stencil algorithms on regular lattices appear in many fields of computational science, and much effort has been put into optimized implementations. Such activities are usually not guided by performance models that provide estimates of expected speedup. Understanding the performance properties and bottlenecks by performance modeling enables a clear view on promising optimization opportunities. In this work we refine the recently developed Execution-Cache-Memory (ECM) model and use it to quantify the performance bottlenecks of stencil algorithms on a contemporary Intel processor. This includes applying the model to arrive at single-core performance and scalability predictions for typical corner case stencil loop kernels. Guided by the ECM model we accurately quantify the significance of "layer conditions," which are required to estimate the data traffic through the memory hierarchy, and study the impact of typical optimization approaches such as spatial blocking, strength reduction, and temporal blocking for their expected benefits. We also compare the ECM model to the widely known Roofline model.
[ { "created": "Sat, 18 Oct 2014 21:49:45 GMT", "version": "v1" }, { "created": "Sat, 17 Jan 2015 14:07:26 GMT", "version": "v2" } ]
2016-01-28
[ [ "Stengel", "Holger", "" ], [ "Treibig", "Jan", "" ], [ "Hager", "Georg", "" ], [ "Wellein", "Gerhard", "" ] ]
Stencil algorithms on regular lattices appear in many fields of computational science, and much effort has been put into optimized implementations. Such activities are usually not guided by performance models that provide estimates of expected speedup. Understanding the performance properties and bottlenecks by performance modeling enables a clear view on promising optimization opportunities. In this work we refine the recently developed Execution-Cache-Memory (ECM) model and use it to quantify the performance bottlenecks of stencil algorithms on a contemporary Intel processor. This includes applying the model to arrive at single-core performance and scalability predictions for typical corner case stencil loop kernels. Guided by the ECM model we accurately quantify the significance of "layer conditions," which are required to estimate the data traffic through the memory hierarchy, and study the impact of typical optimization approaches such as spatial blocking, strength reduction, and temporal blocking for their expected benefits. We also compare the ECM model to the widely known Roofline model.
2209.10948
Robin Zbinden
Robin Zbinden
Implementing and Experimenting with Diffusion Models for Text-to-Image Generation
Master's Thesis
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Taking advantage of the many recent advances in deep learning, text-to-image generative models currently have the merit of attracting the general public's attention. Two of these models, DALL-E 2 and Imagen, have demonstrated that highly photorealistic images can be generated from a simple textual description of an image. Based on a novel approach for image generation called diffusion models, text-to-image models enable the production of many different types of high resolution images, where human imagination is the only limit. However, these models require exceptionally large amounts of computational resources to train, as well as handling huge datasets collected from the internet. In addition, neither the codebase nor the models have been released, which prevents the AI community from experimenting with these cutting-edge models and makes the reproduction of their results complicated, if not impossible. In this thesis, we aim to contribute by first reviewing the different approaches and techniques used by these models, and then by proposing our own implementation of a text-to-image model. Largely based on DALL-E 2, we introduce several slight modifications to tackle the high computational cost involved. We thus have the opportunity to experiment in order to understand what these models are capable of, especially in a low resource regime. In particular, we provide additional analyses, deeper than the ones performed by the authors of DALL-E 2, including ablation studies. Besides, diffusion models use so-called guidance methods to help the generating process. We introduce a new guidance method which can be used in conjunction with other guidance methods to improve image quality. Finally, the images generated by our model are of reasonably good quality, without having to sustain the significant training costs of state-of-the-art text-to-image models.
[ { "created": "Thu, 22 Sep 2022 12:03:33 GMT", "version": "v1" } ]
2022-09-23
[ [ "Zbinden", "Robin", "" ] ]
Taking advantage of the many recent advances in deep learning, text-to-image generative models currently have the merit of attracting the general public's attention. Two of these models, DALL-E 2 and Imagen, have demonstrated that highly photorealistic images can be generated from a simple textual description of an image. Based on a novel approach for image generation called diffusion models, text-to-image models enable the production of many different types of high resolution images, where human imagination is the only limit. However, these models require exceptionally large amounts of computational resources to train, as well as handling huge datasets collected from the internet. In addition, neither the codebase nor the models have been released, which prevents the AI community from experimenting with these cutting-edge models and makes the reproduction of their results complicated, if not impossible. In this thesis, we aim to contribute by first reviewing the different approaches and techniques used by these models, and then by proposing our own implementation of a text-to-image model. Largely based on DALL-E 2, we introduce several slight modifications to tackle the high computational cost involved. We thus have the opportunity to experiment in order to understand what these models are capable of, especially in a low resource regime. In particular, we provide additional analyses, deeper than the ones performed by the authors of DALL-E 2, including ablation studies. Besides, diffusion models use so-called guidance methods to help the generating process. We introduce a new guidance method which can be used in conjunction with other guidance methods to improve image quality. Finally, the images generated by our model are of reasonably good quality, without having to sustain the significant training costs of state-of-the-art text-to-image models.
1506.07077
Carmelo Cascone
Carmelo Cascone, Luca Pollini, Davide Sanvito, Antonio Capone
Traffic Management Applications for Stateful SDN Data Plane
6 pages, 9 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The successful OpenFlow approach to Software Defined Networking (SDN) allows network programmability through a central controller able to orchestrate a set of dumb switches. However, the simple match/action abstraction of OpenFlow switches constrains the evolution of the forwarding rules to be fully managed by the controller. This can be particularly limiting for a number of applications that are affected by the delay of the slow control path, like traffic management applications. Some recent proposals are pushing toward an evolution of the OpenFlow abstraction to enable the evolution of forwarding policies directly in the data plane based on state machines and local events. In this paper, we present two traffic management applications that exploit a stateful data plane and their prototype implementation based on OpenState, an OpenFlow evolution that we recently proposed.
[ { "created": "Tue, 23 Jun 2015 16:22:48 GMT", "version": "v1" }, { "created": "Mon, 31 Aug 2015 16:38:42 GMT", "version": "v2" } ]
2015-09-01
[ [ "Cascone", "Carmelo", "" ], [ "Pollini", "Luca", "" ], [ "Sanvito", "Davide", "" ], [ "Capone", "Antonio", "" ] ]
The successful OpenFlow approach to Software Defined Networking (SDN) allows network programmability through a central controller able to orchestrate a set of dumb switches. However, the simple match/action abstraction of OpenFlow switches constrains the evolution of the forwarding rules to be fully managed by the controller. This can be particularly limiting for a number of applications that are affected by the delay of the slow control path, like traffic management applications. Some recent proposals are pushing toward an evolution of the OpenFlow abstraction to enable the evolution of forwarding policies directly in the data plane based on state machines and local events. In this paper, we present two traffic management applications that exploit a stateful data plane and their prototype implementation based on OpenState, an OpenFlow evolution that we recently proposed.
1903.02938
Farhad Farzbod
Farhad Farzbod, Onome E. Scott-Emuakpor
Force Analysis for Interactions beyond the Closest Neighbor in a Periodic Structure
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Periodic structures are a type of metamaterial whose physical properties depend not only on the unit cell materials but also on the way unit cells are arranged and interact with each other. Periodic structures have interesting wave propagation properties, making them suitable materials for acoustic filters and wave beaming devices. Bloch analysis is the main tool to analyze wave propagation in these structures. In most, if not all, engineering structures, each unit cell interacts with adjacent cells. As such, methods developed for the vibrational and wave propagation analysis of engineering periodic structures address forces exerted by the closest neighbor only. Since metamaterial properties depend on the interactions of a unit cell with neighboring cells, more interactions mean a more complex band structure. In this paper, we address force analysis when interactions extend beyond the closest neighbor. This analysis lays the foundation for the vibrational analysis of structures in which interactions are not restricted to the closest neighbor.
[ { "created": "Tue, 5 Mar 2019 04:10:11 GMT", "version": "v1" } ]
2019-03-08
[ [ "Farzbod", "Farhad", "" ], [ "Scott-Emuakpor", "Onome E.", "" ] ]
Periodic structures are a type of metamaterial whose physical properties depend not only on the unit cell materials but also on the way unit cells are arranged and interact with each other. Periodic structures have interesting wave propagation properties, making them suitable materials for acoustic filters and wave beaming devices. Bloch analysis is the main tool to analyze wave propagation in these structures. In most, if not all, engineering structures, each unit cell interacts with adjacent cells. As such, methods developed for the vibrational and wave propagation analysis of engineering periodic structures address forces exerted by the closest neighbor only. Since metamaterial properties depend on the interactions of a unit cell with neighboring cells, more interactions mean a more complex band structure. In this paper, we address force analysis when interactions extend beyond the closest neighbor. This analysis lays the foundation for the vibrational analysis of structures in which interactions are not restricted to the closest neighbor.
2109.12703
Bailian Chen
Bailian Chen, Dylan R. Harp, Yingqi Zhang, Curtis M. Oldenburg, Rajesh J. Pawar
Dynamic Risk Assessment for Geologic CO2 Sequestration
28 pages, 9 figures
null
null
null
cs.IT cs.NA math.IT math.NA
http://creativecommons.org/licenses/by/4.0/
At a geologic CO2 sequestration (GCS) site, geologic uncertainty usually leads to large uncertainty in the predictions of properties that influence metrics for leakage risk assessment, such as CO2 saturations and pressures in potentially leaky wellbores, CO2/brine leakage rates, and leakage consequences such as changes in drinking water quality in groundwater aquifers. The large uncertainty in these risk-related system properties and risk metrics can lead to over-conservative risk management decisions to ensure safe operations of GCS sites. The objective of this work is to develop a novel approach based on dynamic risk assessment to effectively reduce the uncertainty in the predicted risk-related system properties and risk metrics. We demonstrate our framework for dynamic risk assessment on two case studies: a 3D synthetic example and a synthetic field example based on the Rock Springs Uplift (RSU) storage site in Wyoming, USA. Results show that the NRAP-Open-IAM risk assessment tool coupled with a conformance evaluation can be used to effectively quantify and reduce the uncertainty in the predictions of risk-related system properties and risk metrics in GCS.
[ { "created": "Sun, 26 Sep 2021 20:52:20 GMT", "version": "v1" } ]
2021-09-28
[ [ "Chen", "Bailian", "" ], [ "Harp", "Dylan R.", "" ], [ "Zhang", "Yingqi", "" ], [ "Oldenburg", "Curtis M.", "" ], [ "Pawar", "Rajesh J.", "" ] ]
At a geologic CO2 sequestration (GCS) site, geologic uncertainty usually leads to large uncertainty in the predictions of properties that influence metrics for leakage risk assessment, such as CO2 saturations and pressures in potentially leaky wellbores, CO2/brine leakage rates, and leakage consequences such as changes in drinking water quality in groundwater aquifers. The large uncertainty in these risk-related system properties and risk metrics can lead to over-conservative risk management decisions to ensure safe operations of GCS sites. The objective of this work is to develop a novel approach based on dynamic risk assessment to effectively reduce the uncertainty in the predicted risk-related system properties and risk metrics. We demonstrate our framework for dynamic risk assessment on two case studies: a 3D synthetic example and a synthetic field example based on the Rock Springs Uplift (RSU) storage site in Wyoming, USA. Results show that the NRAP-Open-IAM risk assessment tool coupled with a conformance evaluation can be used to effectively quantify and reduce the uncertainty in the predictions of risk-related system properties and risk metrics in GCS.
1212.6375
\'Oscar C. V\'asquez
Oscar C. V\'asquez
Energy in computing systems with speed scaling: optimization and mechanisms design
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a simple scheduling game for the speed scaling model. Players want their jobs to complete early, which, however, incurs high energy consumption. We address the game from the mechanism design side, and by charging the energy usage to the players we seek a good compromise between quality of service and energy usage.
[ { "created": "Thu, 27 Dec 2012 14:11:28 GMT", "version": "v1" } ]
2013-01-01
[ [ "Vásquez", "Oscar C.", "" ] ]
We study a simple scheduling game for the speed scaling model. Players want their jobs to complete early, which, however, incurs high energy consumption. We address the game from the mechanism design side, and by charging the energy usage to the players we seek a good compromise between quality of service and energy usage.
2408.01272
Xinhuan Shu
Xinhuan Shu, Alexis Pister, Junxiu Tang, Fanny Chevalier, Benjamin Bach
Does This Have a Particular Meaning? Interactive Pattern Explanation for Network Visualizations
to be published in IEEE VIS 2024
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who do not understand these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then automatically mines the underlying data patterns, and explains both visual and data patterns present in the viewer's selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to textual-only and visual-only (cheatsheets) explanations. Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.
[ { "created": "Fri, 2 Aug 2024 13:50:15 GMT", "version": "v1" } ]
2024-08-05
[ [ "Shu", "Xinhuan", "" ], [ "Pister", "Alexis", "" ], [ "Tang", "Junxiu", "" ], [ "Chevalier", "Fanny", "" ], [ "Bach", "Benjamin", "" ] ]
This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who do not understand these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then automatically mines the underlying data patterns, and explains both visual and data patterns present in the viewer's selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to textual-only and visual-only (cheatsheets) explanations. Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.
1609.09544
Shuchin Aeron
Josh Girson and Shuchin Aeron
Algorithms for item categorization based on ordinal ranking data
To appear in IEEE Allerton conference on computing, communications and control, 2016
null
null
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new method for identifying the latent categorization of items based on their rankings. Complementing a recent work that uses a Dirichlet prior on preference vectors and variational inference, we show that this problem can be effectively dealt with using existing community detection algorithms, with the communities corresponding to item categories. In particular, we convert the bipartite ranking data to a unipartite graph of item affinities, and apply community detection algorithms. In this context we modify an existing algorithm - namely the label propagation algorithm, to a variant that uses the distance between the nodes for weighting the label propagation - to identify the categories. We propose and analyze a synthetic ordinal ranking model and show its relation to the recently much studied stochastic block model. We test our algorithms on synthetic data and compare performance with several popular community detection algorithms. We also test the method on real data sets of movie categorization from the MovieLens database. In all of the cases our algorithm is able to identify the categories for a suitable choice of tuning parameter.
[ { "created": "Thu, 29 Sep 2016 22:59:45 GMT", "version": "v1" } ]
2016-10-03
[ [ "Girson", "Josh", "" ], [ "Aeron", "Shuchin", "" ] ]
We present a new method for identifying the latent categorization of items based on their rankings. Complementing a recent work that uses a Dirichlet prior on preference vectors and variational inference, we show that this problem can be effectively dealt with using existing community detection algorithms, with the communities corresponding to item categories. In particular, we convert the bipartite ranking data to a unipartite graph of item affinities, and apply community detection algorithms. In this context we modify an existing algorithm - namely the label propagation algorithm, to a variant that uses the distance between the nodes for weighting the label propagation - to identify the categories. We propose and analyze a synthetic ordinal ranking model and show its relation to the recently much studied stochastic block model. We test our algorithms on synthetic data and compare performance with several popular community detection algorithms. We also test the method on real data sets of movie categorization from the MovieLens database. In all of the cases our algorithm is able to identify the categories for a suitable choice of tuning parameter.
1609.00483
Sheng Zhou
Weisi Guo, Sheng Zhou, Yunfei Chen, Siyi Wang, Xiaoli Chu, Zhisheng Niu
Simultaneous Information and Energy Flow for IoT Relay Systems with Crowd Harvesting
to appear in IEEE Communications Magazine
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is expected that the number of wireless devices will grow rapidly over the next few years due to the growing proliferation of Internet-of-Things (IoT). In order to improve the energy efficiency of information transfer between small devices, we review state-of-the-art research in simultaneous wireless energy and information transfer, especially for relay based IoT systems. In particular, we analyze simultaneous information-and-energy transfer from the source node, and the design of time-switching and power-splitting operation modes, as well as the associated optimization algorithms. We also investigate the potential of crowd energy harvesting from transmission nodes that belong to multiple radio networks. The combination of source and crowd energy harvesting can greatly reduce the use of battery power and increase the availability and reliability for relaying. We provide insight into the fundamental limits of crowd energy harvesting reliability based on a case study using real city data. Furthermore, we examine the optimization of transmissions in crowd harvesting, especially with the use of node collaboration while guaranteeing Quality-of-Service (QoS).
[ { "created": "Fri, 2 Sep 2016 07:10:32 GMT", "version": "v1" } ]
2016-09-05
[ [ "Guo", "Weisi", "" ], [ "Zhou", "Sheng", "" ], [ "Chen", "Yunfei", "" ], [ "Wang", "Siyi", "" ], [ "Chu", "Xiaoli", "" ], [ "Niu", "Zhisheng", "" ] ]
It is expected that the number of wireless devices will grow rapidly over the next few years due to the growing proliferation of Internet-of-Things (IoT). In order to improve the energy efficiency of information transfer between small devices, we review state-of-the-art research in simultaneous wireless energy and information transfer, especially for relay based IoT systems. In particular, we analyze simultaneous information-and-energy transfer from the source node, and the design of time-switching and power-splitting operation modes, as well as the associated optimization algorithms. We also investigate the potential of crowd energy harvesting from transmission nodes that belong to multiple radio networks. The combination of source and crowd energy harvesting can greatly reduce the use of battery power and increase the availability and reliability for relaying. We provide insight into the fundamental limits of crowd energy harvesting reliability based on a case study using real city data. Furthermore, we examine the optimization of transmissions in crowd harvesting, especially with the use of node collaboration while guaranteeing Quality-of-Service (QoS).
2209.08649
Himarsha R Jayanetti
Himarsha R. Jayanetti, Shawn M. Jones, Martin Klein, Alex Osbourne, Paul Koerbin, Michael L. Nelson, Michele C. Weigle
Creating Structure in Web Archives With Collections: Different Concepts From Web Archivists
5 figures, 16 pages, accepted for publication at TPDL 2022
null
null
null
cs.DL
http://creativecommons.org/licenses/by-nc-sa/4.0/
As web archives' holdings grow, archivists subdivide them into collections so they are easier to understand and manage. In this work, we review the collection structures of eight web archive platforms: Archive-It, Conifer, the Croatian Web Archive (HAW), the Internet Archive's user account web archives, Library of Congress (LC), PANDORA, Trove, and the UK Web Archive (UKWA). We note a plethora of different approaches to web archive collection structures. Some web archive collections support sub-collections and some permit embargoes. Curatorial decisions may be attributed to a single organization or many. Archived web pages are known by many names: mementos, copies, captures, or snapshots. Some platforms restrict a memento to a single collection and others allow mementos to cross collections. Knowledge of collection structures has implications for many different applications and users. Visitors will need to understand how to navigate collections. Future archivists will need to understand what options are available for designing collections. Platform designers need it to know what possibilities exist. The developers of tools that consume collections need to understand collection structures so they can meet the needs of their users.
[ { "created": "Sun, 18 Sep 2022 20:31:25 GMT", "version": "v1" } ]
2022-09-20
[ [ "Jayanetti", "Himarsha R.", "" ], [ "Jones", "Shawn M.", "" ], [ "Klein", "Martin", "" ], [ "Osbourne", "Alex", "" ], [ "Koerbin", "Paul", "" ], [ "Nelson", "Michael L.", "" ], [ "Weigle", "Michele C.", "" ] ]
As web archives' holdings grow, archivists subdivide them into collections so they are easier to understand and manage. In this work, we review the collection structures of eight web archive platforms: Archive-It, Conifer, the Croatian Web Archive (HAW), the Internet Archive's user account web archives, Library of Congress (LC), PANDORA, Trove, and the UK Web Archive (UKWA). We note a plethora of different approaches to web archive collection structures. Some web archive collections support sub-collections and some permit embargoes. Curatorial decisions may be attributed to a single organization or many. Archived web pages are known by many names: mementos, copies, captures, or snapshots. Some platforms restrict a memento to a single collection and others allow mementos to cross collections. Knowledge of collection structures has implications for many different applications and users. Visitors will need to understand how to navigate collections. Future archivists will need to understand what options are available for designing collections. Platform designers need it to know what possibilities exist. The developers of tools that consume collections need to understand collection structures so they can meet the needs of their users.
1912.12421
Ying Cui
Wei Xu, Ying Cui, Zhi Liu and Haoran Li
Optimal Multi-View Video Transmission in OFDMA Systems
to appear in IEEE Communications Letters
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this letter, we study the transmission of a multi-view video (MVV) to multiple users in an Orthogonal Frequency Division Multiple Access (OFDMA) system. To maximally improve transmission efficiency, we exploit both natural multicast opportunities and view synthesis-enabled multicast opportunities. First, we establish a communication model for transmission of a MVV to multiple users in an OFDMA system. Then, we formulate the minimization problem of the average weighted sum energy consumption for view transmission and synthesis with respect to view selection and transmission power and subcarrier allocation. The optimization problem is a challenging mixed discrete-continuous optimization problem with huge numbers of variables and constraints. A low-complexity algorithm is proposed to obtain a suboptimal solution. Finally, numerical results further demonstrate the value of view synthesis-enabled multicast opportunities for MVV transmission in OFDMA systems.
[ { "created": "Sat, 28 Dec 2019 07:46:21 GMT", "version": "v1" } ]
2020-01-01
[ [ "Xu", "Wei", "" ], [ "Cui", "Ying", "" ], [ "Liu", "Zhi", "" ], [ "Li", "Haoran", "" ] ]
In this letter, we study the transmission of a multi-view video (MVV) to multiple users in an Orthogonal Frequency Division Multiple Access (OFDMA) system. To maximally improve transmission efficiency, we exploit both natural multicast opportunities and view synthesis-enabled multicast opportunities. First, we establish a communication model for transmission of a MVV to multiple users in an OFDMA system. Then, we formulate the minimization problem of the average weighted sum energy consumption for view transmission and synthesis with respect to view selection and transmission power and subcarrier allocation. The optimization problem is a challenging mixed discrete-continuous optimization problem with huge numbers of variables and constraints. A low-complexity algorithm is proposed to obtain a suboptimal solution. Finally, numerical results further demonstrate the value of view synthesis-enabled multicast opportunities for MVV transmission in OFDMA systems.
1906.04859
Yunhao Tang
Yunhao Tang, Shipra Agrawal, Yuri Faenza
Reinforcement Learning for Integer Programming: Learning to Cut
Accepted at International Conference on Machine Learning (ICML) 2020
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integer programming (IP) is a general optimization framework widely applicable to a variety of unstructured and structured problems arising in, e.g., scheduling, production planning, and graph optimization. As IP models many provably hard-to-solve problems, modern IP solvers rely on many heuristics. These heuristics are usually human-designed, and naturally prone to suboptimality. The goal of this work is to show that the performance of those solvers can be greatly enhanced using reinforcement learning (RL). In particular, we investigate a specific methodology for solving IPs, known as the Cutting Plane Method. This method is employed as a subroutine by all modern IP solvers. We present a deep RL formulation, network architecture, and algorithms for intelligent adaptive selection of cutting planes (aka cuts). Across a wide range of IP tasks, we show that the trained RL agent significantly outperforms human-designed heuristics, and effectively generalizes to 10X larger instances and across IP problem classes. The trained agent is also demonstrated to benefit the popular downstream application of cutting plane methods in the Branch-and-Cut algorithm, which is the backbone of state-of-the-art commercial IP solvers.
[ { "created": "Tue, 11 Jun 2019 23:14:46 GMT", "version": "v1" }, { "created": "Sun, 19 Jul 2020 20:34:20 GMT", "version": "v2" }, { "created": "Tue, 21 Jul 2020 12:57:19 GMT", "version": "v3" } ]
2020-07-22
[ [ "Tang", "Yunhao", "" ], [ "Agrawal", "Shipra", "" ], [ "Faenza", "Yuri", "" ] ]
Integer programming (IP) is a general optimization framework widely applicable to a variety of unstructured and structured problems arising in, e.g., scheduling, production planning, and graph optimization. As IP models many provably hard-to-solve problems, modern IP solvers rely on many heuristics. These heuristics are usually human-designed, and naturally prone to suboptimality. The goal of this work is to show that the performance of those solvers can be greatly enhanced using reinforcement learning (RL). In particular, we investigate a specific methodology for solving IPs, known as the Cutting Plane Method. This method is employed as a subroutine by all modern IP solvers. We present a deep RL formulation, network architecture, and algorithms for intelligent adaptive selection of cutting planes (aka cuts). Across a wide range of IP tasks, we show that the trained RL agent significantly outperforms human-designed heuristics, and effectively generalizes to 10X larger instances and across IP problem classes. The trained agent is also demonstrated to benefit the popular downstream application of cutting plane methods in the Branch-and-Cut algorithm, which is the backbone of state-of-the-art commercial IP solvers.
2311.09141
Bruno Ziliotto
Andr\'es Cristi and Bruno Ziliotto
Prophet Inequalities Require Only a Constant Number of Samples
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a prophet inequality problem, $n$ independent random variables are presented to a gambler one by one. The gambler decides when to stop the sequence and obtains the most recent value as reward. We evaluate a stopping rule by the worst-case ratio between its expected reward and the expectation of the maximum variable. In the classic setting, the order is fixed, and the optimal ratio is known to be 1/2. Three variants of this problem have been extensively studied: the prophet-secretary model, where variables arrive in uniformly random order; the free-order model, where the gambler chooses the arrival order; and the i.i.d. model, where the distributions are all the same, rendering the arrival order irrelevant. Most of the literature assumes that distributions are known to the gambler. Recent work has considered the question of what is achievable when the gambler has access only to a few samples per distribution. Surprisingly, in the fixed-order case, a single sample from each distribution is enough to approximate the optimal ratio, but this is not the case in any of the three variants. We provide a unified proof that for all three variants of the problem, a constant number of samples (independent of n) for each distribution is good enough to approximate the optimal ratios. Prior to our work, this was known to be the case only in the i.i.d. variant. We complement our result showing that our algorithms can be implemented in polynomial time. A key ingredient in our proof is an existential result based on a minimax argument, which states that there must exist an algorithm that attains the optimal ratio and does not rely on the knowledge of the upper tail of the distributions. A second key ingredient is a refined sample-based version of a decomposition of the instance into "small" and "large" variables, first introduced by Liu et al. [EC'21].
[ { "created": "Wed, 15 Nov 2023 17:35:04 GMT", "version": "v1" } ]
2023-11-16
[ [ "Cristi", "Andrés", "" ], [ "Ziliotto", "Bruno", "" ] ]
In a prophet inequality problem, $n$ independent random variables are presented to a gambler one by one. The gambler decides when to stop the sequence and obtains the most recent value as reward. We evaluate a stopping rule by the worst-case ratio between its expected reward and the expectation of the maximum variable. In the classic setting, the order is fixed, and the optimal ratio is known to be 1/2. Three variants of this problem have been extensively studied: the prophet-secretary model, where variables arrive in uniformly random order; the free-order model, where the gambler chooses the arrival order; and the i.i.d. model, where the distributions are all the same, rendering the arrival order irrelevant. Most of the literature assumes that distributions are known to the gambler. Recent work has considered the question of what is achievable when the gambler has access only to a few samples per distribution. Surprisingly, in the fixed-order case, a single sample from each distribution is enough to approximate the optimal ratio, but this is not the case in any of the three variants. We provide a unified proof that for all three variants of the problem, a constant number of samples (independent of n) for each distribution is good enough to approximate the optimal ratios. Prior to our work, this was known to be the case only in the i.i.d. variant. We complement our result showing that our algorithms can be implemented in polynomial time. A key ingredient in our proof is an existential result based on a minimax argument, which states that there must exist an algorithm that attains the optimal ratio and does not rely on the knowledge of the upper tail of the distributions. A second key ingredient is a refined sample-based version of a decomposition of the instance into "small" and "large" variables, first introduced by Liu et al. [EC'21].
2010.06631
Jialu Zhang
Jialu Zhang, Yitan Wang, Mark Santolucito and Ruzica Piskac
Succinct Explanations With Cascading Decision Trees
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The decision tree is one of the most popular and classical machine learning models from the 1980s. However, in many practical applications, decision trees tend to generate decision paths with excessive depth. Long decision paths often cause overfitting problems, and make models difficult to interpret. With longer decision paths, inference is also more likely to fail when the data contain missing values. In this work, we propose a new tree model called Cascading Decision Trees to alleviate this problem. The key insight of Cascading Decision Trees is to separate the decision path and the explanation path. Our experiments show that on average, Cascading Decision Trees generate 63.38% shorter explanation paths, avoiding overfitting and thus achieve higher test accuracy. We also empirically demonstrate that Cascading Decision Trees have advantages in the robustness against missing values.
[ { "created": "Tue, 13 Oct 2020 18:48:39 GMT", "version": "v1" }, { "created": "Tue, 29 Nov 2022 17:29:50 GMT", "version": "v2" } ]
2022-11-30
[ [ "Zhang", "Jialu", "" ], [ "Wang", "Yitan", "" ], [ "Santolucito", "Mark", "" ], [ "Piskac", "Ruzica", "" ] ]
The decision tree is one of the most popular and classical machine learning models from the 1980s. However, in many practical applications, decision trees tend to generate decision paths with excessive depth. Long decision paths often cause overfitting problems, and make models difficult to interpret. With longer decision paths, inference is also more likely to fail when the data contain missing values. In this work, we propose a new tree model called Cascading Decision Trees to alleviate this problem. The key insight of Cascading Decision Trees is to separate the decision path and the explanation path. Our experiments show that on average, Cascading Decision Trees generate 63.38% shorter explanation paths, avoiding overfitting and thus achieve higher test accuracy. We also empirically demonstrate that Cascading Decision Trees have advantages in the robustness against missing values.
2403.09415
Rishabh Haria
Rishabh Vallabh Varsha Haria, Amin El Abed, Sebastian Maneth
User Identification via Free Roaming Eye Tracking Data
null
null
null
null
cs.LG cs.HC
http://creativecommons.org/licenses/by/4.0/
We present a new dataset of "free roaming" (FR) and "targeted roaming" (TR): a pool of 41 participants is asked to walk around a university campus (FR) or is asked to find a particular room within a library (TR). Eye movements are recorded using a commodity wearable eye tracker (Pupil Labs Neon at 200Hz). On this dataset we investigate the accuracy of user identification using a previously known machine learning pipeline where a Radial Basis Function Network (RBFN) is used as classifier. Our highest accuracies are 87.3% for FR and 89.4% for TR. This should be compared to 95.3% which is the (corresponding) highest accuracy we are aware of (achieved in a laboratory setting using the "RAN" stimulus of the BioEye 2015 competition dataset). To the best of our knowledge, our results are the first that study user identification in a non laboratory setting; such settings are often more feasible than laboratory settings and may include further advantages. The minimum duration of each recording is 263s for FR and 154s for TR. Our best accuracies are obtained when restricting to 120s and 140s for FR and TR respectively, always cut from the end of the trajectories (both for the training and testing sessions). If we cut the same length from the beginning, then accuracies are 12.2% lower for FR and around 6.4% lower for TR. On the full trajectories accuracies are lower by 5% and 52% for FR and TR. We also investigate the impact of including higher order velocity derivatives (such as acceleration, jerk, or jounce).
[ { "created": "Thu, 14 Mar 2024 14:04:37 GMT", "version": "v1" } ]
2024-03-15
[ [ "Haria", "Rishabh Vallabh Varsha", "" ], [ "Abed", "Amin El", "" ], [ "Maneth", "Sebastian", "" ] ]
We present a new dataset of "free roaming" (FR) and "targeted roaming" (TR): a pool of 41 participants is asked to walk around a university campus (FR) or to find a particular room within a library (TR). Eye movements are recorded using a commodity wearable eye tracker (Pupil Labs Neon at 200Hz). On this dataset we investigate the accuracy of user identification using a previously known machine learning pipeline in which a Radial Basis Function Network (RBFN) is used as the classifier. Our highest accuracies are 87.3% for FR and 89.4% for TR. This should be compared to 95.3%, which is the (corresponding) highest accuracy we are aware of (achieved in a laboratory setting using the "RAN" stimulus of the BioEye 2015 competition dataset). To the best of our knowledge, our results are the first to study user identification in a non-laboratory setting; such settings are often more feasible than laboratory settings and may offer further advantages. The minimum duration of each recording is 263s for FR and 154s for TR. Our best accuracies are obtained when restricting to 120s and 140s for FR and TR respectively, always cut from the end of the trajectories (both for the training and testing sessions). If we cut the same length from the beginning, then accuracies are 12.2% lower for FR and around 6.4% lower for TR. On the full trajectories, accuracies are lower by 5% and 52% for FR and TR. We also investigate the impact of including higher-order velocity derivatives (such as acceleration, jerk, or jounce).
2212.05895
Juhua Liu
Haibin He, Xinyuan Chen, Chaoyue Wang, Juhua Liu, Bo Du, Dacheng Tao, Yu Qiao
Diff-Font: Diffusion Model for Robust One-Shot Font Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Font generation is a difficult and time-consuming task, especially for languages using ideograms that have complicated structures with a large number of characters, such as Chinese. To solve this problem, few-shot font generation and even one-shot font generation have attracted a lot of attention. However, most existing font generation methods may still suffer from (i) the large cross-font gap challenge; (ii) the subtle cross-font variation problem; and (iii) the incorrect generation of complicated characters. In this paper, we propose a novel one-shot font generation method based on a diffusion model, named Diff-Font, which can be stably trained on large datasets. The proposed model aims to generate the entire font library given only one sample as the reference. Specifically, a large stroke-wise dataset is constructed, and a stroke-wise diffusion model is proposed to preserve the structure and completeness of each generated character. To the best of our knowledge, the proposed Diff-Font is the first work to develop diffusion models for the font generation task. The well-trained Diff-Font is not only robust to font gaps and font variations, but also achieves promising performance on difficult character generation. Compared to previous font generation methods, our model reaches state-of-the-art performance both qualitatively and quantitatively.
[ { "created": "Mon, 12 Dec 2022 13:51:50 GMT", "version": "v1" }, { "created": "Thu, 6 Apr 2023 15:28:18 GMT", "version": "v2" }, { "created": "Sun, 7 May 2023 15:37:56 GMT", "version": "v3" } ]
2023-05-09
[ [ "He", "Haibin", "" ], [ "Chen", "Xinyuan", "" ], [ "Wang", "Chaoyue", "" ], [ "Liu", "Juhua", "" ], [ "Du", "Bo", "" ], [ "Tao", "Dacheng", "" ], [ "Qiao", "Yu", "" ] ]
Font generation is a difficult and time-consuming task, especially for languages using ideograms that have complicated structures with a large number of characters, such as Chinese. To solve this problem, few-shot font generation and even one-shot font generation have attracted a lot of attention. However, most existing font generation methods may still suffer from (i) the large cross-font gap challenge; (ii) the subtle cross-font variation problem; and (iii) the incorrect generation of complicated characters. In this paper, we propose a novel one-shot font generation method based on a diffusion model, named Diff-Font, which can be stably trained on large datasets. The proposed model aims to generate the entire font library given only one sample as the reference. Specifically, a large stroke-wise dataset is constructed, and a stroke-wise diffusion model is proposed to preserve the structure and completeness of each generated character. To the best of our knowledge, the proposed Diff-Font is the first work to develop diffusion models for the font generation task. The well-trained Diff-Font is not only robust to font gaps and font variations, but also achieves promising performance on difficult character generation. Compared to previous font generation methods, our model reaches state-of-the-art performance both qualitatively and quantitatively.
0801.0523
Florent De Dinechin
Florent De Dinechin (LIP), Christoph Quirin Lauter (LIP), Guillaume Melquiond (LIP)
Certifying floating-point implementations using Gappa
null
null
null
null
cs.NA cs.MS
null
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purposes). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
[ { "created": "Thu, 3 Jan 2008 13:34:03 GMT", "version": "v1" } ]
2008-01-04
[ [ "De Dinechin", "Florent", "", "LIP" ], [ "Lauter", "Christoph Quirin", "", "LIP" ], [ "Melquiond", "Guillaume", "", "LIP" ] ]
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purpose). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
2212.08661
Feng Qiu
Feng Qiu, Chengyang Xie, Yu Ding, Wanzeng Kong
EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis
6 pages, 1 figure
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Humans are skilled at reading the interlocutor's emotion from multimodal signals, including spoken words, simultaneous speech, and facial expressions. It is still a challenge to effectively decode emotions from the complex interactions of multimodal signals. In this paper, we design three kinds of multimodal latent representations to refine the emotion analysis process and capture complex multimodal interactions from different views, including an intact three-modal integrating representation, a modality-shared representation, and three modality-individual representations. Then, a modality-semantic hierarchical fusion is proposed to reasonably incorporate these representations into a comprehensive interaction representation. The experimental results demonstrate that our EffMulti outperforms the state-of-the-art methods. The compelling performance benefits from its well-designed framework with ease of implementation, lower computing complexity, and fewer trainable parameters.
[ { "created": "Fri, 16 Dec 2022 03:05:55 GMT", "version": "v1" } ]
2022-12-20
[ [ "Qiu", "Feng", "" ], [ "Xie", "Chengyang", "" ], [ "Ding", "Yu", "" ], [ "Kong", "Wanzeng", "" ] ]
Humans are skilled at reading the interlocutor's emotion from multimodal signals, including spoken words, simultaneous speech, and facial expressions. It is still a challenge to effectively decode emotions from the complex interactions of multimodal signals. In this paper, we design three kinds of multimodal latent representations to refine the emotion analysis process and capture complex multimodal interactions from different views, including an intact three-modal integrating representation, a modality-shared representation, and three modality-individual representations. Then, a modality-semantic hierarchical fusion is proposed to reasonably incorporate these representations into a comprehensive interaction representation. The experimental results demonstrate that our EffMulti outperforms the state-of-the-art methods. The compelling performance benefits from its well-designed framework with ease of implementation, lower computing complexity, and fewer trainable parameters.
2106.14642
Li Meng
Li Meng, Anis Yazidi, Morten Goodwin, Paal Engelstad
Expert Q-learning: Deep Reinforcement Learning with Coarse State Values from Offline Expert Examples
Camera-ready version
Septentrio Academic, Tromso, Norway, 2022
10.7557/18.6237
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we propose a novel algorithm for deep reinforcement learning named Expert Q-learning. Expert Q-learning is inspired by Dueling Q-learning and aims at incorporating semi-supervised learning into reinforcement learning by splitting Q-values into state values and action advantages. We require that an offline expert assesses the value of a state in a coarse manner using three discrete values. An expert network is designed in addition to the Q-network and is updated after each regular offline minibatch update whenever the expert example buffer is not empty. Using the board game Othello, we compare our algorithm with the baseline Q-learning algorithm, which is a combination of Double Q-learning and Dueling Q-learning. Our results show that Expert Q-learning is indeed useful and more resistant to the overestimation bias. The baseline Q-learning algorithm exhibits unstable and suboptimal behavior in non-deterministic settings, whereas Expert Q-learning demonstrates more robust performance with higher scores, illustrating that our algorithm is indeed suitable for integrating state values from expert examples into Q-learning.
[ { "created": "Mon, 28 Jun 2021 12:41:45 GMT", "version": "v1" }, { "created": "Tue, 29 Jun 2021 13:37:31 GMT", "version": "v2" }, { "created": "Wed, 2 Mar 2022 09:46:07 GMT", "version": "v3" }, { "created": "Mon, 24 Jun 2024 14:51:14 GMT", "version": "v4" }, { "created": "Tue, 25 Jun 2024 07:08:34 GMT", "version": "v5" } ]
2024-06-26
[ [ "Meng", "Li", "" ], [ "Yazidi", "Anis", "" ], [ "Goodwin", "Morten", "" ], [ "Engelstad", "Paal", "" ] ]
In this article, we propose a novel algorithm for deep reinforcement learning named Expert Q-learning. Expert Q-learning is inspired by Dueling Q-learning and aims at incorporating semi-supervised learning into reinforcement learning by splitting Q-values into state values and action advantages. We require that an offline expert assesses the value of a state in a coarse manner using three discrete values. An expert network is designed in addition to the Q-network and is updated after each regular offline minibatch update whenever the expert example buffer is not empty. Using the board game Othello, we compare our algorithm with the baseline Q-learning algorithm, which is a combination of Double Q-learning and Dueling Q-learning. Our results show that Expert Q-learning is indeed useful and more resistant to the overestimation bias. The baseline Q-learning algorithm exhibits unstable and suboptimal behavior in non-deterministic settings, whereas Expert Q-learning demonstrates more robust performance with higher scores, illustrating that our algorithm is indeed suitable for integrating state values from expert examples into Q-learning.
2004.13106
Beyza Ermis Ms
Beyza Ermis, Patrick Ernst, Yannik Stein, Giovanni Zappella
Learning to Rank in the Position Based Model with Bandit Feedback
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Personalization is a crucial aspect of many online experiences. In particular, content ranking is often a key component in delivering sophisticated personalization results. Commonly, supervised learning-to-rank methods are applied, which suffer from bias introduced during data collection by production systems in charge of producing the ranking. To compensate for this problem, we leverage contextual multi-armed bandits. We propose novel extensions of two well-known algorithms, viz. LinUCB and Linear Thompson Sampling, to the ranking use-case. To account for the biases in a production environment, we employ the position-based click model. Finally, we show the validity of the proposed algorithms by conducting extensive offline experiments on synthetic datasets as well as customer-facing online A/B experiments.
[ { "created": "Mon, 27 Apr 2020 19:12:20 GMT", "version": "v1" } ]
2020-04-29
[ [ "Ermis", "Beyza", "" ], [ "Ernst", "Patrick", "" ], [ "Stein", "Yannik", "" ], [ "Zappella", "Giovanni", "" ] ]
Personalization is a crucial aspect of many online experiences. In particular, content ranking is often a key component in delivering sophisticated personalization results. Commonly, supervised learning-to-rank methods are applied, which suffer from bias introduced during data collection by production systems in charge of producing the ranking. To compensate for this problem, we leverage contextual multi-armed bandits. We propose novel extensions of two well-known algorithms, viz. LinUCB and Linear Thompson Sampling, to the ranking use-case. To account for the biases in a production environment, we employ the position-based click model. Finally, we show the validity of the proposed algorithms by conducting extensive offline experiments on synthetic datasets as well as customer-facing online A/B experiments.
1503.01566
Harsh Tataria Mr.
Harsh Tataria, Mansoor Shafi, Peter J. Smith, Pawel A. Dmochowski
Coordinated Two-Tier Heterogeneous Cellular Networks with Leakage Based Beamforming
7 pages, 8 figures, submitted to IEEE International Conference on Communications (ICC) 4th International Workshop on Small Cells and 5G (SmallNets), London, UK, June 2015
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we demonstrate the rate gains achieved by two-tier heterogeneous cellular networks (HetNets) with varying degrees of coordination between macrocell and microcell base stations (BSs). We show that without the presence of coordination, network densification does not provide any gain in the sum rate and rapidly decreases the mean per-user signal-to-interference-plus-noise-ratio (SINR). Our results show that coordination reduces the rate of SINR decay with increasing numbers of microcell BSs in the system. Validity of the analytically approximated mean per-user SINR over a wide range of signal-to-noise-ratio (SNR) is demonstrated via comparison with the simulated results.
[ { "created": "Thu, 5 Mar 2015 07:50:06 GMT", "version": "v1" } ]
2015-03-06
[ [ "Tataria", "Harsh", "" ], [ "Shafi", "Mansoor", "" ], [ "Smith", "Peter J.", "" ], [ "Dmochowski", "Pawel A.", "" ] ]
In this paper we demonstrate the rate gains achieved by two-tier heterogeneous cellular networks (HetNets) with varying degrees of coordination between macrocell and microcell base stations (BSs). We show that without the presence of coordination, network densification does not provide any gain in the sum rate and rapidly decreases the mean per-user signal-to-interference-plus-noise-ratio (SINR). Our results show that coordination reduces the rate of SINR decay with increasing numbers of microcell BSs in the system. Validity of the analytically approximated mean per-user SINR over a wide range of signal-to-noise-ratio (SNR) is demonstrated via comparison with the simulated results.
2210.17222
Davide Salvi
Luigi Attorresi, Davide Salvi, Clara Borrelli, Paolo Bestagini, Stefano Tubaro
Combining Automatic Speaker Verification and Prosody Analysis for Synthetic Speech Detection
null
null
null
null
cs.SD cs.CV cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid spread of media content synthesis technology and the potentially damaging impact of audio and video deepfakes on people's lives have raised the need to implement systems able to detect these forgeries automatically. In this work we present a novel approach for synthetic speech detection, exploiting the combination of two high-level semantic properties of the human voice. On one side, we focus on speaker identity cues and represent them as speaker embeddings extracted using a state-of-the-art method for the automatic speaker verification task. On the other side, voice prosody, intended as variations in rhythm, pitch or accent in speech, is extracted through a specialized encoder. We show that the combination of these two embeddings fed to a supervised binary classifier allows the detection of deepfake speech generated with both Text-to-Speech and Voice Conversion techniques. Our results show improvements over the considered baselines, good generalization properties over multiple datasets and robustness to audio compression.
[ { "created": "Mon, 31 Oct 2022 11:03:03 GMT", "version": "v1" } ]
2022-11-01
[ [ "Attorresi", "Luigi", "" ], [ "Salvi", "Davide", "" ], [ "Borrelli", "Clara", "" ], [ "Bestagini", "Paolo", "" ], [ "Tubaro", "Stefano", "" ] ]
The rapid spread of media content synthesis technology and the potentially damaging impact of audio and video deepfakes on people's lives have raised the need to implement systems able to detect these forgeries automatically. In this work we present a novel approach for synthetic speech detection, exploiting the combination of two high-level semantic properties of the human voice. On one side, we focus on speaker identity cues and represent them as speaker embeddings extracted using a state-of-the-art method for the automatic speaker verification task. On the other side, voice prosody, intended as variations in rhythm, pitch or accent in speech, is extracted through a specialized encoder. We show that the combination of these two embeddings fed to a supervised binary classifier allows the detection of deepfake speech generated with both Text-to-Speech and Voice Conversion techniques. Our results show improvements over the considered baselines, good generalization properties over multiple datasets and robustness to audio compression.
2306.14237
Nikolaos Koursioumpas
Lina Magoula, Nikolaos Koursioumpas, Alexandros-Ioannis Thanopoulos, Theodora Panagea, Nikolaos Petropouleas, M. A. Gutierrez-Estevez, Ramin Khalili
A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks
6 pages, 6 figures, Accepted in IEEE PIMRC 2023 Conference, Latest revision with small corrections (typos etc.)
null
10.1109/PIMRC56721.2023.10293863
null
cs.NE cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) has emerged as a decentralized technique, where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner while preserving data privacy. Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified. Towards mitigating the carbon footprint of FL, the current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization, by orchestrating the computational and communication resources of the involved devices, while guaranteeing a certain FL model performance target. A penalty function is introduced in the offline phase of the GA that penalizes the strategies that violate the constraints of the environment, ensuring a safe GA process. Evaluation results show the effectiveness of the proposed scheme compared to two state-of-the-art baseline solutions, achieving a decrease of up to 83% in the total energy consumption.
[ { "created": "Sun, 25 Jun 2023 13:10:38 GMT", "version": "v1" }, { "created": "Wed, 5 Jul 2023 10:14:52 GMT", "version": "v2" } ]
2023-11-07
[ [ "Magoula", "Lina", "" ], [ "Koursioumpas", "Nikolaos", "" ], [ "Thanopoulos", "Alexandros-Ioannis", "" ], [ "Panagea", "Theodora", "" ], [ "Petropouleas", "Nikolaos", "" ], [ "Gutierrez-Estevez", "M. A.", "" ], [ "Khalili", "Ramin", "" ] ]
Federated Learning (FL) has emerged as a decentralized technique, where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner while preserving data privacy. Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified. Towards mitigating the carbon footprint of FL, the current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization, by orchestrating the computational and communication resources of the involved devices, while guaranteeing a certain FL model performance target. A penalty function is introduced in the offline phase of the GA that penalizes the strategies that violate the constraints of the environment, ensuring a safe GA process. Evaluation results show the effectiveness of the proposed scheme compared to two state-of-the-art baseline solutions, achieving a decrease of up to 83% in the total energy consumption.
0806.4468
Rui Zhang
Rui Zhang, Shuguang Cui, and Ying-Chang Liang
On Ergodic Sum Capacity of Fading Cognitive Multiple-Access and Broadcast Channels
To appear in IEEE Transactions on Information Theory
null
10.1109/TIT.2009.2030449
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the information-theoretic limits of a secondary or cognitive radio (CR) network under spectrum sharing with an existing primary radio network. In particular, the fading cognitive multiple-access channel (C-MAC) is first studied, where multiple secondary users transmit to the secondary base station (BS) under both individual transmit-power constraints and a set of interference-power constraints each applied at one of the primary receivers. This paper considers the long-term (LT) or the short-term (ST) transmit-power constraint over the fading states at each secondary transmitter, combined with the LT or ST interference-power constraint at each primary receiver. In each case, the optimal power allocation scheme is derived for the secondary users to achieve the ergodic sum capacity of the fading C-MAC, as well as the conditions for the optimality of the dynamic time-division-multiple-access (D-TDMA) scheme in the secondary network. The fading cognitive broadcast channel (C-BC) that models the downlink transmission in the secondary network is then studied under the LT/ST transmit-power constraint at the secondary BS jointly with the LT/ST interference-power constraint at each of the primary receivers. It is shown that D-TDMA is indeed optimal for achieving the ergodic sum capacity of the fading C-BC for all combinations of transmit-power and interference-power constraints.
[ { "created": "Fri, 27 Jun 2008 09:32:01 GMT", "version": "v1" }, { "created": "Mon, 3 Aug 2009 05:59:45 GMT", "version": "v2" } ]
2016-11-18
[ [ "Zhang", "Rui", "" ], [ "Cui", "Shuguang", "" ], [ "Liang", "Ying-Chang", "" ] ]
This paper studies the information-theoretic limits of a secondary or cognitive radio (CR) network under spectrum sharing with an existing primary radio network. In particular, the fading cognitive multiple-access channel (C-MAC) is first studied, where multiple secondary users transmit to the secondary base station (BS) under both individual transmit-power constraints and a set of interference-power constraints each applied at one of the primary receivers. This paper considers the long-term (LT) or the short-term (ST) transmit-power constraint over the fading states at each secondary transmitter, combined with the LT or ST interference-power constraint at each primary receiver. In each case, the optimal power allocation scheme is derived for the secondary users to achieve the ergodic sum capacity of the fading C-MAC, as well as the conditions for the optimality of the dynamic time-division-multiple-access (D-TDMA) scheme in the secondary network. The fading cognitive broadcast channel (C-BC) that models the downlink transmission in the secondary network is then studied under the LT/ST transmit-power constraint at the secondary BS jointly with the LT/ST interference-power constraint at each of the primary receivers. It is shown that D-TDMA is indeed optimal for achieving the ergodic sum capacity of the fading C-BC for all combinations of transmit-power and interference-power constraints.
2309.14054
Piyush Tiwary
Piyush Tiwary, Atri Guha, Subhodip Panda, Prathosh A.P
Adapt then Unlearn: Exploiting Parameter Space Semantics for Unlearning in Generative Adversarial Networks
15 pages, 12 figures
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
The increased attention to regulating the outputs of deep generative models, driven by growing concerns about privacy and regulatory compliance, has highlighted the need for effective control over these models. This necessity arises from instances where generative models produce outputs containing undesirable, offensive, or potentially harmful content. To tackle this challenge, the concept of machine unlearning has emerged, aiming to forget specific learned information or to erase the influence of undesired data subsets from a trained model. The objective of this work is to prevent the generation of outputs containing undesired features from a pre-trained GAN where the underlying training data set is inaccessible. Our approach is inspired by a crucial observation: the parameter space of GANs exhibits meaningful directions that can be leveraged to suppress specific undesired features. However, such directions usually result in the degradation of the quality of generated samples. Our proposed method, known as 'Adapt-then-Unlearn,' excels at unlearning such undesirable features while also maintaining the quality of generated samples. This method unfolds in two stages: in the initial stage, we adapt the pre-trained GAN using negative samples provided by the user, while in the subsequent stage, we focus on unlearning the undesired feature. During the latter phase, we train the pre-trained GAN using positive samples, incorporating a repulsion regularizer. This regularizer encourages the model's parameters to stay away from the parameters associated with the adapted model from the first stage while also maintaining the quality of generated samples. To the best of our knowledge, our approach stands as the first method addressing unlearning in GANs. We validate the effectiveness of our method through comprehensive experiments.
[ { "created": "Mon, 25 Sep 2023 11:36:20 GMT", "version": "v1" } ]
2023-09-26
[ [ "Tiwary", "Piyush", "" ], [ "Guha", "Atri", "" ], [ "Panda", "Subhodip", "" ], [ "P", "Prathosh A.", "" ] ]
The increased attention to regulating the outputs of deep generative models, driven by growing concerns about privacy and regulatory compliance, has highlighted the need for effective control over these models. This necessity arises from instances where generative models produce outputs containing undesirable, offensive, or potentially harmful content. To tackle this challenge, the concept of machine unlearning has emerged, aiming to forget specific learned information or to erase the influence of undesired data subsets from a trained model. The objective of this work is to prevent the generation of outputs containing undesired features from a pre-trained GAN where the underlying training data set is inaccessible. Our approach is inspired by a crucial observation: the parameter space of GANs exhibits meaningful directions that can be leveraged to suppress specific undesired features. However, such directions usually result in the degradation of the quality of generated samples. Our proposed method, known as 'Adapt-then-Unlearn,' excels at unlearning such undesirable features while also maintaining the quality of generated samples. This method unfolds in two stages: in the initial stage, we adapt the pre-trained GAN using negative samples provided by the user, while in the subsequent stage, we focus on unlearning the undesired feature. During the latter phase, we train the pre-trained GAN using positive samples, incorporating a repulsion regularizer. This regularizer encourages the model's parameters to stay away from the parameters associated with the adapted model from the first stage while also maintaining the quality of generated samples. To the best of our knowledge, our approach stands as the first method addressing unlearning in GANs. We validate the effectiveness of our method through comprehensive experiments.
2204.03831
Wooyoung Kim
Wooyoung Kim, Chaerin Jo, Minjung Kim and Wooju Kim
Marvelous Agglutinative Language Effect on Cross Lingual Transfer Learning
ICEC2022 Oral
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For multilingual language models, it is important to select the languages used for training because of the curse of multilinguality. It is known that using languages with similar language structures is effective for cross-lingual transfer learning. However, we demonstrate that using agglutinative languages such as Korean is more effective in cross-lingual transfer learning. This is a notable finding that may change the training strategy of cross-lingual transfer learning.
[ { "created": "Fri, 8 Apr 2022 04:04:45 GMT", "version": "v1" }, { "created": "Thu, 23 May 2024 07:10:43 GMT", "version": "v2" }, { "created": "Fri, 24 May 2024 07:13:18 GMT", "version": "v3" } ]
2024-05-27
[ [ "Kim", "Wooyoung", "" ], [ "Jo", "Chaerin", "" ], [ "Kim", "Minjung", "" ], [ "Kim", "Wooju", "" ] ]
For multilingual language models, it is important to select the languages used for training because of the curse of multilinguality. It is known that using languages with similar structures is effective for cross-lingual transfer learning. However, we demonstrate that using agglutinative languages such as Korean is more effective for cross-lingual transfer learning. This finding may change the training strategy for cross-lingual transfer learning.
2305.16440
Navjot Singh
Navjot Singh, Suhas Diggavi
Representation Transfer Learning via Multiple Pre-trained models for Linear Regression
20 pages
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In this paper, we consider the problem of learning a linear regression model on a data domain of interest (target) given few samples. To aid learning, we are provided with a set of pre-trained regression models that are trained on potentially different data domains (sources). Assuming a representation structure for the data-generating linear models at the source and target domains, we propose a representation-transfer-based learning method for constructing the target model. The proposed scheme comprises two phases: (i) utilizing the different source representations to construct a representation that is adapted to the target data, and (ii) using the obtained model as an initialization for a fine-tuning procedure that re-trains the entire (over-parameterized) regression model on the target data. For each phase of the training method, we provide excess risk bounds for the learned model compared to the true data-generating target model. The derived bounds show a gain in sample complexity for our proposed method compared to the baseline method of not leveraging source representations when achieving the same excess risk, therefore theoretically demonstrating the effectiveness of transfer learning for linear regression.
[ { "created": "Thu, 25 May 2023 19:35:24 GMT", "version": "v1" }, { "created": "Sun, 25 Jun 2023 01:16:32 GMT", "version": "v2" } ]
2023-06-27
[ [ "Singh", "Navjot", "" ], [ "Diggavi", "Suhas", "" ] ]
In this paper, we consider the problem of learning a linear regression model on a data domain of interest (target) given few samples. To aid learning, we are provided with a set of pre-trained regression models that are trained on potentially different data domains (sources). Assuming a representation structure for the data-generating linear models at the source and target domains, we propose a representation-transfer-based learning method for constructing the target model. The proposed scheme comprises two phases: (i) utilizing the different source representations to construct a representation that is adapted to the target data, and (ii) using the obtained model as an initialization for a fine-tuning procedure that re-trains the entire (over-parameterized) regression model on the target data. For each phase of the training method, we provide excess risk bounds for the learned model compared to the true data-generating target model. The derived bounds show a gain in sample complexity for our proposed method compared to the baseline method of not leveraging source representations when achieving the same excess risk, therefore theoretically demonstrating the effectiveness of transfer learning for linear regression.
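The two-phase scheme in this abstract can be mimicked on toy data. Everything below, including the dimensions, the least-squares combination of source representations in phase (i), and the plain gradient-descent fine-tuning in phase (ii), is an illustrative sketch rather than the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 5, 2, 8  # ambient dimension, representation width, target samples

# Two hypothetical source representations and a small target dataset.
B1, B2 = rng.normal(size=(d, k)), rng.normal(size=(d, k))
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

# Phase (i): combine the source representations by least squares on the
# target data, yielding a target-adapted initialization in parameter space.
Phi = np.hstack([X @ B1, X @ B2])                  # (n, 2k) features
alpha, *_ = np.linalg.lstsq(Phi, y, rcond=None)
w = np.hstack([B1, B2]) @ alpha
mse_init = float(np.mean((X @ w - y) ** 2))

# Phase (ii): fine-tune the full d-dimensional model by gradient descent.
for _ in range(2000):
    w -= 0.05 * (2 / n) * X.T @ (X @ w - y)
mse_final = float(np.mean((X @ w - y) ** 2))
```

The point of starting from `w` rather than zero is that, when the target model is close to the span of the source representations, phase (i) already does most of the work with very few target samples; phase (ii) only needs to close the remaining gap.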
1810.05561
Prathamesh Mayekar
Prathamesh Mayekar, Parimal Parag, and Himanshu Tyagi
Optimal Source Codes for Timely Updates
Added a missing reference, in IEEE Transactions on Information Theory, 2020
IEEE Transactions on Information Theory, vol. 66, no. 6, pp. 3714--3731, June 2020
10.1109/TIT.2020.2983151
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A transmitter observing a sequence of independent and identically distributed random variables seeks to keep a receiver updated about its latest observations. The receiver need not be apprised about each symbol seen by the transmitter, but needs to output a symbol at each time instant $t$. If at time $t$ the receiver outputs the symbol seen by the transmitter at time $U(t)\leq t$, the age of information at the receiver at time $t$ is $t-U(t)$. We study the design of lossless source codes that enable transmission with minimum average age at the receiver. We show that the asymptotic minimum average age can be attained up to a constant gap by the Shannon codes for a tilted version of the original pmf generating the symbols, which can be computed easily by solving an optimization problem. Furthermore, we exhibit an example with alphabet $\mathcal{X}$ where Shannon codes for the original pmf incur an asymptotic average age of a factor $O(\sqrt{\log |\mathcal{X}|})$ more than that achieved by our codes. Underlying our prescription for optimal codes is a new variational formula for integer moments of random variables, which may be of independent interest. Also, we discuss possible extensions of our formulation to randomized schemes and to the erasure channel, and include a treatment of the related problem of source coding for minimum average queuing delay.
[ { "created": "Fri, 12 Oct 2018 14:59:13 GMT", "version": "v1" }, { "created": "Mon, 7 Jan 2019 09:45:45 GMT", "version": "v2" }, { "created": "Fri, 27 Mar 2020 05:32:01 GMT", "version": "v3" } ]
2021-03-26
[ [ "Mayekar", "Prathamesh", "" ], [ "Parag", "Parimal", "" ], [ "Tyagi", "Himanshu", "" ] ]
A transmitter observing a sequence of independent and identically distributed random variables seeks to keep a receiver updated about its latest observations. The receiver need not be apprised about each symbol seen by the transmitter, but needs to output a symbol at each time instant $t$. If at time $t$ the receiver outputs the symbol seen by the transmitter at time $U(t)\leq t$, the age of information at the receiver at time $t$ is $t-U(t)$. We study the design of lossless source codes that enable transmission with minimum average age at the receiver. We show that the asymptotic minimum average age can be attained up to a constant gap by the Shannon codes for a tilted version of the original pmf generating the symbols, which can be computed easily by solving an optimization problem. Furthermore, we exhibit an example with alphabet $\mathcal{X}$ where Shannon codes for the original pmf incur an asymptotic average age of a factor $O(\sqrt{\log |\mathcal{X}|})$ more than that achieved by our codes. Underlying our prescription for optimal codes is a new variational formula for integer moments of random variables, which may be of independent interest. Also, we discuss possible extensions of our formulation to randomized schemes and to the erasure channel, and include a treatment of the related problem of source coding for minimum average queuing delay.
1202.3470
Markus Jalsenius
Raphael Clifford, Markus Jalsenius, Ely Porat, Benjamin Sach
Pattern Matching in Multiple Streams
13 pages, 1 figure
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the problem of deterministic pattern matching in multiple streams. In this model, one symbol arrives at a time and is associated with one of s streaming texts. The task at each time step is to report if there is a new match between a fixed pattern of length m and a newly updated stream. As is usual in the streaming context, the goal is to use as little space as possible while still reporting matches quickly. We give almost matching upper and lower space bounds for three distinct pattern matching problems. For exact matching we show that the problem can be solved in constant time per arriving symbol and O(m+s) words of space. For the k-mismatch and k-difference problems we give O(k) time solutions that require O(m+ks) words of space. In all three cases we also give space lower bounds which show our methods are optimal up to a single logarithmic factor. Finally we set out a number of open problems related to this new model for pattern matching.
[ { "created": "Wed, 15 Feb 2012 23:11:48 GMT", "version": "v1" }, { "created": "Wed, 25 Apr 2012 13:54:14 GMT", "version": "v2" } ]
2012-04-26
[ [ "Clifford", "Raphael", "" ], [ "Jalsenius", "Markus", "" ], [ "Porat", "Ely", "" ], [ "Sach", "Benjamin", "" ] ]
We investigate the problem of deterministic pattern matching in multiple streams. In this model, one symbol arrives at a time and is associated with one of s streaming texts. The task at each time step is to report if there is a new match between a fixed pattern of length m and a newly updated stream. As is usual in the streaming context, the goal is to use as little space as possible while still reporting matches quickly. We give almost matching upper and lower space bounds for three distinct pattern matching problems. For exact matching we show that the problem can be solved in constant time per arriving symbol and O(m+s) words of space. For the k-mismatch and k-difference problems we give O(k) time solutions that require O(m+ks) words of space. In all three cases we also give space lower bounds which show our methods are optimal up to a single logarithmic factor. Finally we set out a number of open problems related to this new model for pattern matching.
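The O(m+s)-word space budget for exact matching can be illustrated with a KMP-style sketch: one shared failure table of m words for the pattern, plus a single integer of automaton state per stream. Note that this toy version only achieves amortized constant time per symbol; the paper's construction achieves worst-case constant time, which requires more machinery than is shown here:

```python
def failure_function(pattern):
    """Standard KMP failure table, computed once and shared by all streams."""
    f = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = f[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        f[i] = k
    return f

class MultiStreamMatcher:
    """O(m) words for the shared table plus O(1) words per stream: O(m + s)
    total, matching the space bound quoted in the abstract."""

    def __init__(self, pattern, num_streams):
        self.pattern = pattern
        self.f = failure_function(pattern)
        self.state = [0] * num_streams   # one integer per stream

    def feed(self, stream_id, c):
        """Consume one symbol for the given stream; report a new match."""
        k = self.state[stream_id]
        while k and c != self.pattern[k]:
            k = self.f[k - 1]
        if c == self.pattern[k]:
            k += 1
        match = (k == len(self.pattern))
        if match:
            k = self.f[k - 1]            # keep scanning for overlapping matches
        self.state[stream_id] = k
        return match
```

Each arriving symbol touches only the shared pattern structures and its own stream's integer, so streams are fully independent of one another.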
2208.09814
Hideaki Iiduka
Hideaki Iiduka
Critical Batch Size Minimizes Stochastic First-Order Oracle Complexity of Deep Learning Optimizer using Hyperparameters Close to One
arXiv admin note: text overlap with arXiv:2112.07163
null
null
null
cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
Practical results have shown that deep learning optimizers using small constant learning rates, hyperparameters close to one, and large batch sizes can find the model parameters of deep neural networks that minimize the loss functions. We first show theoretical evidence that the momentum method (Momentum) and adaptive moment estimation (Adam) perform well in the sense that the upper bound of the theoretical performance measure is small with a small constant learning rate, hyperparameters close to one, and a large batch size. Next, we show that there exists a batch size called the critical batch size minimizing the stochastic first-order oracle (SFO) complexity, which is the stochastic gradient computation cost, and that SFO complexity increases once the batch size exceeds the critical batch size. Finally, we provide numerical results that support our theoretical results. That is, the numerical results indicate that Adam using a small constant learning rate, hyperparameters close to one, and the critical batch size minimizing SFO complexity has faster convergence than Momentum and stochastic gradient descent (SGD).
[ { "created": "Sun, 21 Aug 2022 06:11:23 GMT", "version": "v1" } ]
2022-08-23
[ [ "Iiduka", "Hideaki", "" ] ]
Practical results have shown that deep learning optimizers using small constant learning rates, hyperparameters close to one, and large batch sizes can find the model parameters of deep neural networks that minimize the loss functions. We first show theoretical evidence that the momentum method (Momentum) and adaptive moment estimation (Adam) perform well in the sense that the upper bound of the theoretical performance measure is small with a small constant learning rate, hyperparameters close to one, and a large batch size. Next, we show that there exists a batch size called the critical batch size minimizing the stochastic first-order oracle (SFO) complexity, which is the stochastic gradient computation cost, and that SFO complexity increases once the batch size exceeds the critical batch size. Finally, we provide numerical results that support our theoretical results. That is, the numerical results indicate that Adam using a small constant learning rate, hyperparameters close to one, and the critical batch size minimizing SFO complexity has faster convergence than Momentum and stochastic gradient descent (SGD).
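The notion of a critical batch size can be illustrated with a cost model of the kind used in this line of work. The functional form of K(b) and all constants below are illustrative assumptions, not values taken from the paper:

```python
# Hypothetical cost model: K(b) = C1*b / (eps^2*b - C2) stochastic-gradient
# steps are needed to reach precision eps with batch size b, so the SFO
# complexity (total gradient computations) is N(b) = b * K(b).
C1, C2, eps = 1.0, 2.0, 0.1

def sfo(b):
    """SFO complexity N(b) = b * K(b) under the hypothetical model above."""
    return C1 * b * b / (eps**2 * b - C2)

# In this model N(b) is decreasing and then increasing, with an interior
# minimizer b* = 2*C2/eps^2: the critical batch size.
b_star = 2 * C2 / eps**2
```

Below `b_star`, larger batches reduce the step count faster than they raise the per-step cost; above it, the extra gradients per step dominate, which is the qualitative behavior the abstract describes.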
2010.02787
Maximilian Katzmann
Thomas Bl\"asius, Tobias Friedrich, Maximilian Katzmann
Efficiently Approximating Vertex Cover on Scale-Free Networks with Underlying Hyperbolic Geometry
null
null
10.1007/s00453-023-01143-x
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Finding a minimum vertex cover in a network is a fundamental NP-complete graph problem. One way to deal with its computational hardness is to trade the qualitative performance of an algorithm (allowing non-optimal outputs) for an improved running time. For the vertex cover problem, there is a gap between theory and practice when it comes to understanding this tradeoff. On the one hand, it is known that it is NP-hard to approximate a minimum vertex cover within a factor of $\sqrt{2}$. On the other hand, a simple greedy algorithm yields close to optimal approximations in practice. A promising approach towards understanding this discrepancy is to recognize the differences between theoretical worst-case instances and real-world networks. Following this direction, we close the gap between theory and practice by providing an algorithm that efficiently computes nearly optimal vertex cover approximations on hyperbolic random graphs; a network model that closely resembles real-world networks in terms of degree distribution, clustering, and the small-world property. More precisely, our algorithm computes a $(1 + o(1))$-approximation, asymptotically almost surely, and has a running time of $\mathcal{O}(m \log(n))$. The proposed algorithm is an adaptation of the successful greedy approach, enhanced with a procedure that improves on parts of the graph where greedy is not optimal. This makes it possible to introduce a parameter that can be used to tune the tradeoff between approximation performance and running time. Our empirical evaluation on real-world networks shows that this allows for improving over the near-optimal results of the greedy approach.
[ { "created": "Tue, 6 Oct 2020 14:56:48 GMT", "version": "v1" }, { "created": "Fri, 25 Jun 2021 11:04:42 GMT", "version": "v2" }, { "created": "Wed, 8 Dec 2021 14:56:02 GMT", "version": "v3" }, { "created": "Wed, 13 Dec 2023 09:02:07 GMT", "version": "v4" } ]
2023-12-14
[ [ "Bläsius", "Thomas", "" ], [ "Friedrich", "Tobias", "" ], [ "Katzmann", "Maximilian", "" ] ]
Finding a minimum vertex cover in a network is a fundamental NP-complete graph problem. One way to deal with its computational hardness is to trade the qualitative performance of an algorithm (allowing non-optimal outputs) for an improved running time. For the vertex cover problem, there is a gap between theory and practice when it comes to understanding this tradeoff. On the one hand, it is known that it is NP-hard to approximate a minimum vertex cover within a factor of $\sqrt{2}$. On the other hand, a simple greedy algorithm yields close to optimal approximations in practice. A promising approach towards understanding this discrepancy is to recognize the differences between theoretical worst-case instances and real-world networks. Following this direction, we close the gap between theory and practice by providing an algorithm that efficiently computes nearly optimal vertex cover approximations on hyperbolic random graphs; a network model that closely resembles real-world networks in terms of degree distribution, clustering, and the small-world property. More precisely, our algorithm computes a $(1 + o(1))$-approximation, asymptotically almost surely, and has a running time of $\mathcal{O}(m \log(n))$. The proposed algorithm is an adaptation of the successful greedy approach, enhanced with a procedure that improves on parts of the graph where greedy is not optimal. This makes it possible to introduce a parameter that can be used to tune the tradeoff between approximation performance and running time. Our empirical evaluation on real-world networks shows that this allows for improving over the near-optimal results of the greedy approach.
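The "simple greedy algorithm" this abstract refers to is the standard degree-greedy heuristic, sketched below. The paper's contribution is the additional local-improvement procedure layered on top of it, which is not shown here:

```python
from collections import defaultdict

def greedy_vertex_cover(edges):
    """Degree-greedy heuristic: repeatedly add a highest-degree vertex to
    the cover and delete its incident edges, until no edges remain."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    while any(adj.values()):
        u = max(adj, key=lambda x: len(adj[x]))  # current highest-degree vertex
        cover.add(u)
        for v in list(adj[u]):
            adj[v].discard(u)
        adj[u].clear()
    return cover
```

On a star graph this returns just the center, which is optimal; on worst-case instances it can be far from optimal, which is exactly the theory-versus-practice gap the abstract discusses.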
1709.00643
Qifeng Chen
Qifeng Chen, Jia Xu, Vladlen Koltun
Fast Image Processing with Fully-Convolutional Networks
Published at the International Conference on Computer Vision (ICCV 2017)
null
null
null
cs.CV cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator's action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphotorealistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 compared to the most accurate prior approximation scheme, while being the fastest. We show that our models generalize across datasets and across resolutions, and investigate a number of extensions of the presented approach. The results are shown in the supplementary video at https://youtu.be/eQyfHgLx8Dc
[ { "created": "Sat, 2 Sep 2017 22:38:13 GMT", "version": "v1" } ]
2017-09-05
[ [ "Chen", "Qifeng", "" ], [ "Xu", "Jia", "" ], [ "Koltun", "Vladlen", "" ] ]
We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator's action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphotorealistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 compared to the most accurate prior approximation scheme, while being the fastest. We show that our models generalize across datasets and across resolutions, and investigate a number of extensions of the presented approach. The results are shown in the supplementary video at https://youtu.be/eQyfHgLx8Dc