Columns (⌀ = nullable):
- id: string, 9–10 chars
- submitter: string, 1–64 chars ⌀
- authors: string, 4–20.7k chars
- title: string, 4–246 chars
- comments: string, 1–523 chars ⌀
- journal-ref: string, 4–404 chars ⌀
- doi: string, 11–153 chars ⌀
- report-no: string, 2–254 chars ⌀
- categories: string, 5–98 chars
- license: string, 9 distinct values
- orig_abstract: string, 14–3.35k chars
- versions: list, 1–60 items
- update_date: string, 10 chars
- authors_parsed: list, 1–1.35k items
- abstract: string, 11–3.34k chars
id: 2305.06185
submitter: Chidi Agbo
authors: Chidi Agbo, Hoda Mehrpouyan
title: Conflict Analysis and Resolution of Safety and Security Boundary Conditions for Industrial Control Systems
comments: 12 pages, 10 figures; 2022 6th International Conference on System Reliability and Safety (ICSRS), 978-1-6654-7092-6 @IEEE
journal-ref: null
doi: 10.1109/ICSRS56243.2022.10067393
report-no: null
categories: cs.CR
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: Safety and security are the two most important properties of industrial control systems (ICS), and their integration is necessary to ensure that safety goals do not undermine security goals and vice versa. Sometimes, safety and security co-engineering leads to conflicting requirements or violations capable of impacting the normal behavior of the system. The identification, analysis, and resolution of conflicts arising from safety and security co-engineering is a major challenge and an under-researched area in safety-critical systems (ICS). This paper presents an STPA-SafeSec-CDCL approach that addresses this challenge. Our proposed methodology combines the STPA-SafeSec approach for safety and security analysis with the Conflict-Driven Clause Learning (CDCL) approach for the identification, analysis, and resolution of conflicts, where conflicting constraints are encoded as satisfiability (SAT) problems. We apply our framework to the Tennessee Eastman Plant process model, a chemical process model developed specifically for the study of industrial control processes, to demonstrate how to use the proposed method. Our methodology goes beyond the requirement analysis phase and can be applied to the early stages of system design and development to increase system reliability, robustness, and resilience.
versions: [{"created": "Wed, 10 May 2023 14:16:49 GMT", "version": "v1"}]
update_date: 2023-05-11
authors_parsed: [["Agbo", "Chidi", ""], ["Mehrpouyan", "Hoda", ""]]
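The abstract's central mechanism, encoding conflicting safety and security constraints as a SAT problem, can be illustrated with a toy sketch. The variables and constraints below are invented for illustration; the paper uses a CDCL solver rather than the brute-force check shown here:

```python
from itertools import product

def satisfiable(n_vars, clauses):
    """Brute-force SAT check: each clause is a list of ints, +v / -v for literal v."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# Hypothetical encoding: x1 = "relief valve open" (a safety requirement),
# x2 = "remote access enabled" (an operational requirement); the security
# policy forbids an open valve while remote access is on.
safety = [[1]]              # x1
security = [[-1, -2], [2]]  # (not x1 or not x2) and x2
print(satisfiable(2, safety + security))  # False: the requirements conflict
```

An unsatisfiable conjunction signals a safety/security conflict to analyze and resolve; a CDCL solver additionally returns learned clauses that localize the conflicting constraints.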

id: cs/0504044
submitter: Richard McClatchey
authors: Michael Thomas, Conrad Steenberg, Frank van Lingen, Harvey Newman, Julian Bunn, Arshad Ali, Richard McClatchey, Ashiq Anjum, Tahir Azim, Waqas ur Rehman, Faisal Khan, Jang Uk In
title: JClarens: A Java Framework for Developing and Deploying Web Services for Grid Computing
comments: 8 pages, 4 figures. Paper at the 3rd IEEE International Conference on Web Services (ICWS05), Florida, USA, July 2005
journal-ref: null
doi: null
report-no: null
categories: cs.DC
license: null
abstract: High Energy Physics (HEP) and other scientific communities have adopted Service-Oriented Architectures (SOA) as part of a larger Grid computing effort. This effort involves the integration of many legacy applications and programming libraries into a SOA framework. The Grid Analysis Environment (GAE) is one such service-oriented architecture, based on the Clarens Grid Services Framework, and is being developed as part of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at the European Laboratory for Particle Physics (CERN). Clarens provides a set of authorization, access control, and discovery services, as well as XML-RPC and SOAP access to all deployed services. Two implementations of the Clarens Web Services Framework (Python and Java) offer integration possibilities for a wide range of programming languages. This paper describes JClarens, the Java implementation of the Clarens Web Services Framework, and several web services of interest to the scientific and Grid community that have been deployed using JClarens.
versions: [{"created": "Mon, 11 Apr 2005 21:45:07 GMT", "version": "v1"}]
update_date: 2007-05-23
authors_parsed: [["Thomas", "Michael", ""], ["Steenberg", "Conrad", ""], ["van Lingen", "Frank", ""], ["Newman", "Harvey", ""], ["Bunn", "Julian", ""], ["Ali", "Arshad", ""], ["McClatchey", "Richard", ""], ["Anjum", "Ashiq", ""], ["Azim", "Tahir", ""], ["Rehman", "Waqas ur", ""], ["Khan", "Faisal", ""], ["In", "Jang Uk", ""]]
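The XML-RPC access mentioned in the abstract can be sketched with Python's standard library. This is a generic stand-in showing how a method call is marshalled on the wire, not Clarens's or JClarens's actual service interfaces:

```python
import xmlrpc.client

# Marshal a method call the way any XML-RPC client would serialize it;
# the method name "math.add" is purely illustrative.
payload = xmlrpc.client.dumps((3, 4), methodname="math.add")

# A server-side framework parses the same XML back into params + method.
params, method = xmlrpc.client.loads(payload)
print(method, params)  # math.add (3, 4)
```

Language-neutral wire formats like this are what let the Python and Java Clarens implementations interoperate with clients in many languages.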

id: 1409.5872
submitter: Peter Schrammel
authors: Peter Schrammel, Daniel Kroening, Martin Brain, Ruben Martins, Tino Teige, Tom Bienmüller
title: Incremental Bounded Model Checking for Embedded Software (extended version)
comments: extended version of paper submitted to EMSOFT'14
journal-ref: null
doi: null
report-no: null
categories: cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Program analysis is on the brink of mainstream in embedded systems development. Formal verification of behavioural requirements, finding runtime errors and automated test case generation are some of the most common applications of automated verification tools based on Bounded Model Checking. Existing industrial tools for embedded software use an off-the-shelf Bounded Model Checker and apply it iteratively to verify the program with an increasing number of unwindings. This approach unnecessarily wastes time repeating work that has already been done and fails to exploit the power of incremental SAT solving. This paper reports on the extension of the software model checker CBMC to support incremental Bounded Model Checking and its successful integration with the industrial embedded software verification tool BTC EmbeddedTester. We present an extensive evaluation over large industrial embedded programs, which shows that incremental Bounded Model Checking cuts runtimes by one order of magnitude in comparison to the standard non-incremental approach, enabling the application of formal verification to large and complex embedded software.
versions: [{"created": "Sat, 20 Sep 2014 09:14:04 GMT", "version": "v1"}]
update_date: 2014-09-23
authors_parsed: [["Schrammel", "Peter", ""], ["Kroening", "Daniel", ""], ["Brain", "Martin", ""], ["Martins", "Ruben", ""], ["Teige", "Tino", ""], ["Bienmüller", "Tom", ""]]
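The incremental idea in the abstract, extending the previous unwinding instead of restarting from scratch at each bound, can be sketched on a toy explicit-state system. This is only an analogy for intuition; CBMC works symbolically via incremental SAT, not by enumerating states:

```python
def bmc_incremental(init, step, bad, max_bound):
    """Check a finite transition system one unwinding at a time.

    Rather than re-exploring from the initial states at every bound
    (the non-incremental scheme), keep the reached frontier and extend
    it. Returns the first bound at which a bad state is reachable,
    or None if none is found within max_bound steps.
    """
    seen = set(init)
    frontier = set(init)
    for k in range(max_bound + 1):
        if any(bad(s) for s in frontier):
            return k
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return None

# Toy counter that wraps at 8; the "bad" (assertion-violating) state is 5.
hit = bmc_incremental(init=[0], step=lambda s: [(s + 1) % 8],
                      bad=lambda s: s == 5, max_bound=10)
print(hit)  # 5
```

The saved work here is the `seen`/`frontier` state carried across bounds; in incremental SAT solving, the analogous carried state is the learned clauses and solver heuristics.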

id: 2110.03522
submitter: Jules Leguy
authors: Jules Leguy, Thomas Cauchy, Beatrice Duval, Benoit Da Mota
title: Surrogate-Based Black-Box Optimization Method for Costly Molecular Properties
comments: Submitted to ICTAI 2021
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: AI-assisted molecular optimization is a very active research field, as it is expected to provide the next-generation drugs and molecular materials. An important difficulty is that the properties to be optimized rely on costly evaluations. Machine learning methods are investigated with success to predict these properties, but show generalization issues on less known areas of the chemical space. We propose here a surrogate-based black-box optimization method to tackle the optimization and machine learning problems jointly. It consists of optimizing the expected improvement of the surrogate of a molecular property using an evolutionary algorithm. The surrogate is defined as a Gaussian Process Regression (GPR) model, learned on a relevant area of the search space with respect to the property to be optimized. We show that our approach can successfully optimize a costly property of interest much faster than a purely metaheuristic approach.
versions: [{"created": "Fri, 1 Oct 2021 15:28:15 GMT", "version": "v1"}]
update_date: 2021-10-08
authors_parsed: [["Leguy", "Jules", ""], ["Cauchy", "Thomas", ""], ["Duval", "Beatrice", ""], ["Da Mota", "Benoit", ""]]
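The expected-improvement acquisition the abstract optimizes has a closed form under a Gaussian surrogate prediction. A minimal sketch of just that formula (the GPR model and the evolutionary search loop from the paper are not reproduced):

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """EI for maximization: E[max(f - best - xi, 0)] with f ~ N(mu, sigma^2),
    where (mu, sigma) is the surrogate's prediction at a candidate point
    and best is the incumbent objective value."""
    if sigma <= 0.0:
        return max(mu - best - xi, 0.0)
    z = (mu - best - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (mu - best - xi) * cdf + sigma * pdf

# A candidate predicted well above the incumbent scores higher than one
# predicted at the incumbent, which still gets credit for uncertainty:
print(expected_improvement(mu=1.0, sigma=0.1, best=0.0) >
      expected_improvement(mu=0.0, sigma=0.1, best=0.0))  # True
```

An evolutionary algorithm can then maximize this cheap acquisition over molecules, reserving the costly property evaluation for the points EI ranks highest.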

id: 1610.08597
submitter: Sanjaya Wijeratne
authors: Sanjaya Wijeratne, Lakshika Balasuriya, Derek Doran, Amit Sheth
title: Word Embeddings to Enhance Twitter Gang Member Profile Identification
comments: 7 pages, 1 figure, 2 tables. Published at the IJCAI Workshop on Semantic Machine Learning (SML 2016)
journal-ref: IJCAI Workshop on Semantic Machine Learning (SML 2016), pp. 18-24, CEUR-WS, New York City, NY, July 2016
doi: null
report-no: null
categories: cs.SI cs.CL cs.CY cs.IR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Gang affiliates have joined the masses who use social media to share thoughts and actions publicly. Interestingly, they use this public medium to express recent illegal actions, to intimidate others, and to share outrageous images and statements. Agencies able to unearth these profiles may thus be able to anticipate, stop, or hasten the investigation of gang-related crimes. This paper investigates the use of word embeddings to help identify gang members on Twitter. Building on our previous work, we generate word embeddings that translate what Twitter users post in their profile descriptions, tweets, profile images, and linked YouTube content to a real vector format amenable for machine learning classification. Our experimental results show that pre-trained word embeddings can boost the accuracy of supervised learning algorithms trained over gang members' social media posts.
versions: [{"created": "Thu, 27 Oct 2016 03:21:49 GMT", "version": "v1"}]
update_date: 2016-10-28
authors_parsed: [["Wijeratne", "Sanjaya", ""], ["Balasuriya", "Lakshika", ""], ["Doran", "Derek", ""], ["Sheth", "Amit", ""]]
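The "real vector format" step in the abstract is commonly done by averaging the embeddings of a profile's tokens into one fixed-length feature vector. A minimal sketch with invented 2-d toy vectors (a trained model such as the paper's would supply hundreds of dimensions):

```python
def profile_vector(tokens, embeddings, dim):
    """Average the embeddings of known tokens into one feature vector;
    out-of-vocabulary tokens are simply skipped."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# Hypothetical 2-d embeddings for two tokens.
emb = {"free": [1.0, 0.0], "bands": [0.0, 1.0]}
print(profile_vector(["free", "bands", "unknown"], emb, 2))  # [0.5, 0.5]
```

The resulting fixed-length vector can then be fed to any standard supervised classifier.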

id: 1206.6030
submitter: Sundararajan Sellamanickam
authors: Sundararajan Sellamanickam, Shirish Shevade
title: An Additive Model View to Sparse Gaussian Process Classifier Design
comments: 14 pages, 3 figures
journal-ref: null
doi: null
report-no: null
categories: cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We consider the problem of designing a sparse Gaussian process classifier (SGPC) that generalizes well. Viewing SGPC design as constructing an additive model, as in boosting, we present an efficient and effective SGPC design method that performs a stage-wise optimization of a predictive loss function. We introduce new methods for two key components of any SGPC design, viz. site parameter estimation and basis vector selection. The proposed adaptive-sampling-based basis vector selection method aids in achieving improved generalization performance at a reduced computational cost. This method can also be used in conjunction with any other site parameter estimation method. It has computational and storage complexities similar to those of the well-known informative vector machine and is suitable for large datasets. The hyperparameters can be determined by optimizing a predictive loss function. The experimental results show better generalization performance of the proposed basis vector selection method on several benchmark datasets, particularly for relatively smaller basis vector set sizes or on difficult datasets.
versions: [{"created": "Tue, 26 Jun 2012 15:58:21 GMT", "version": "v1"}]
update_date: 2012-06-27
authors_parsed: [["Sellamanickam", "Sundararajan", ""], ["Shevade", "Shirish", ""]]
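The stage-wise additive view in the abstract, greedily adding one basis vector at a time to reduce a loss, is the same pattern as classic matching pursuit. The sketch below uses matching pursuit with squared loss as a generic stand-in; it is not the paper's adaptive-sampling selection method or its predictive loss:

```python
def matching_pursuit(target, atoms, n_stages):
    """Stage-wise greedy basis selection: at each stage pick the atom
    most correlated with the current residual, fit its coefficient,
    and subtract its contribution (the additive-model / boosting view)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    residual = list(target)
    chosen = []
    for _ in range(n_stages):
        i = max(range(len(atoms)), key=lambda j: abs(dot(atoms[j], residual)))
        coef = dot(atoms[i], residual) / dot(atoms[i], atoms[i])
        residual = [r - coef * a for r, a in zip(residual, atoms[i])]
        chosen.append((i, coef))
    return chosen, residual

# Three candidate basis vectors; the target lies along the first.
atoms = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
sel, res = matching_pursuit([2.0, 0.0], atoms, 1)
print(sel)  # [(0, 2.0)]: one stage suffices, leaving zero residual
```

Each stage only re-fits the newly added component, which is what keeps stage-wise designs cheap compared with re-optimizing the whole model after every addition.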

id: 2405.04095
submitter: Yiling He
authors: Yiling He, Junchi Lei, Zhan Qin, Kui Ren
title: DREAM: Combating Concept Drift with Explanatory Detection and Adaptation in Malware Classification
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.AI
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract: Deep learning-based malware classifiers face significant challenges due to concept drift. The rapid evolution of malware, especially with new families, can depress classification accuracy to near-random levels. Previous research has primarily focused on detecting drift samples, relying on expert-led analysis and labeling for model retraining. However, these methods often lack a comprehensive understanding of malware concepts and provide limited guidance for effective drift adaptation, leading to unstable detection performance and high human labeling costs. To address these limitations, we introduce DREAM, a novel system designed to surpass the capabilities of existing drift detectors and to establish an explanatory drift adaptation process. DREAM enhances drift detection through model sensitivity and data autonomy. The detector, trained with a semi-supervised approach, proactively captures malware behavior concepts through classifier feedback. During testing, it utilizes samples generated by the detector itself, eliminating reliance on extensive training data. For drift adaptation, DREAM enlarges human intervention, enabling revisions of malware labels and concept explanations embedded within the detector's latent space. To ensure a comprehensive response to concept drift, it facilitates a coordinated update process for both the classifier and the detector. Our evaluation shows that DREAM can effectively improve the drift detection accuracy and reduce the expert analysis effort in adaptation across different malware datasets and classifiers.
versions: [{"created": "Tue, 7 May 2024 07:55:45 GMT", "version": "v1"}, {"created": "Thu, 8 Aug 2024 05:45:56 GMT", "version": "v2"}]
update_date: 2024-08-09
authors_parsed: [["He", "Yiling", ""], ["Lei", "Junchi", ""], ["Qin", "Zhan", ""], ["Ren", "Kui", ""]]

id: 2012.02516
submitter: Patrick Esser
authors: Patrick Esser, Robin Rombach, Björn Ommer
title: A Note on Data Biases in Generative Models
comments: Extended Abstract for the NeurIPS 2020 Workshop on Machine Learning for Creativity and Design
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: It is tempting to think that machines are less prone to unfairness and prejudice. However, machine learning approaches compute their outputs based on data. While biases can enter at any stage of the development pipeline, models are particularly prone to mirroring the biases of the datasets they are trained on, and therefore do not necessarily reflect truths about the world but, primarily, truths about the data. To raise awareness about the relationship between modern algorithms and the data that shape them, we use a conditional invertible neural network to disentangle the dataset-specific information from the information which is shared across different datasets. In this way, we can project the same image onto different datasets, thereby revealing their inherent biases. We use this methodology to (i) investigate the impact of dataset quality on the performance of generative models, (ii) show how societal biases of datasets are replicated by generative models, and (iii) present creative applications through unpaired transfer between diverse datasets such as photographs, oil portraits, and anime. Our code and an interactive demonstration are available at https://github.com/CompVis/net2net .
versions: [{"created": "Fri, 4 Dec 2020 10:46:37 GMT", "version": "v1"}]
update_date: 2020-12-07
authors_parsed: [["Esser", "Patrick", ""], ["Rombach", "Robin", ""], ["Ommer", "Björn", ""]]

id: 2001.08014
submitter: Hongyu Jin
authors: Hongyu Jin, Mohammad Khodaei, Panos Papadimitratos
title: Security and Privacy in Vehicular Social Networks
comments: A chapter for the book "Vehicular Social Networks"
journal-ref: A. M. Vegni, V. Loscrì, and A. V. Vasilakos, Eds. CRC Press, Taylor & Francis Group, March 2017
doi: null
report-no: null
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We surveyed and presented the state-of-the-art VC systems, security and privacy architectures, and technologies, emphasizing security and privacy challenges and their solutions for P2P interactions in VSNs towards standardization and deployment. We note that beyond safety applications, which have drawn a lot of attention in VC systems, there is significant and rising interest in vehicle-to-vehicle interaction for a range of transportation efficiency and infotainment applications, notably LBS as well as a gamut of services by mobile providers. While this enriches the VC systems and the user experience, security and privacy concerns are also intensified. This is especially so considering (i) the privacy risk from the exposure of the users to the service providers, and (ii) the security risk from the interaction with malicious or selfish and thus misbehaving users or infrastructure. We showed that existing solutions can in fact evolve and address the VSN-specific challenges, and improve or even accelerate the adoption of VSN applications.
versions: [{"created": "Wed, 22 Jan 2020 13:49:23 GMT", "version": "v1"}]
update_date: 2020-01-23
authors_parsed: [["Jin", "Hongyu", ""], ["Khodaei", "Mohammad", ""], ["Papadimitratos", "Panos", ""]]

id: 2312.04590
submitter: Alexander Ziller
authors: Alexander Ziller, Tamara T. Mueller, Simon Stieger, Leonhard Feiner, Johannes Brandt, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
title: Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.AI cs.CV cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract: Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive, for example in medical imaging. Privacy Enhancing Technologies (PETs), such as Differential Privacy (DP), aim to circumvent these susceptibilities. DP is the strongest possible protection for training models while bounding the risks of inferring the inclusion of training samples or reconstructing the original data. DP achieves this by setting a quantifiable privacy budget. Although a lower budget decreases the risk of information leakage, it typically also reduces the performance of such models. This imposes a trade-off between robust performance and stringent privacy. Additionally, the interpretation of a privacy budget remains abstract and challenging to contextualize. In this study, we contrast the performance of AI models at various privacy budgets against both theoretical risk bounds and the empirical success of reconstruction attacks. We show that using very large privacy budgets can render reconstruction attacks impossible, while drops in performance are negligible. We thus conclude that not using DP at all is negligent when applying AI models to sensitive data. We deem these results to lay a foundation for further debates on striking a balance between privacy risks and model performance.
versions: [{"created": "Tue, 5 Dec 2023 12:21:30 GMT", "version": "v1"}]
update_date: 2023-12-11
authors_parsed: [["Ziller", "Alexander", ""], ["Mueller", "Tamara T.", ""], ["Stieger", "Simon", ""], ["Feiner", "Leonhard", ""], ["Brandt", "Johannes", ""], ["Braren", "Rickmer", ""], ["Rueckert", "Daniel", ""], ["Kaissis", "Georgios", ""]]
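The "quantifiable privacy budget" in the abstract maps directly to a noise scale. As a generic illustration (not the paper's training setup), the classic Gaussian mechanism calibrates its noise from the budget (ε, δ) and the query's sensitivity, using the standard bound valid for ε < 1:

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Noise scale of the classic Gaussian mechanism for (eps, delta)-DP:
    sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    A tighter (smaller) budget demands proportionally more noise."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

# Shrinking the budget from 0.9 to 0.1 raises the required noise scale:
print(gaussian_sigma(0.1, 1e-5) > gaussian_sigma(0.9, 1e-5))  # True
```

This inverse relationship between budget and noise is the mechanism behind the privacy/performance trade-off the study quantifies: very large budgets mean little added noise, hence the negligible performance drops reported.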

id: 2210.13024
submitter: Martin Lentschat
authors: Puthineath Lay, Martin Lentschat, Cyril Labbé
title: Investigating the detection of Tortured Phrases in Scientific Literature
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.IR
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: With the help of online tools, unscrupulous authors can today generate a pseudo-scientific article and attempt to publish it. Some of these tools work by replacing or paraphrasing existing texts to produce new content, but they have a tendency to generate nonsensical expressions. A recent study introduced the concept of the 'tortured phrase', an unexpected odd phrase that appears instead of a fixed expression, e.g. 'counterfeit consciousness' instead of 'artificial intelligence'. The present study investigates how tortured phrases that are not yet listed can be detected automatically. We conducted several experiments, including non-neural binary classification, neural binary classification, and cosine similarity comparison of the phrase tokens, yielding noticeable results.
versions: [{"created": "Mon, 24 Oct 2022 08:15:22 GMT", "version": "v1"}]
update_date: 2022-10-25
authors_parsed: [["Lay", "Puthineath", ""], ["Lentschat", "Martin", ""], ["Labbé", "Cyril", ""]]
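The cosine-similarity comparison mentioned in the abstract can be sketched directly. The vectors below are invented toy embeddings, not the paper's; the idea is that a candidate phrase whose token representation sits far from the expected fixed expression is suspicious:

```python
import math

def cosine(a, b):
    """Cosine similarity of two vectors; 0.0 if either is the zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical phrase embeddings:
expected = [0.9, 0.1, 0.0]   # e.g. "artificial intelligence"
candidate = [0.1, 0.0, 0.9]  # e.g. "counterfeit consciousness"
print(cosine(expected, candidate) < 0.5)  # True -> flag as possibly tortured
```

A threshold on this similarity gives a simple detector for phrases not yet on any curated list; the paper's neural classifiers are more elaborate refinements of the same signal.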
|
1308.3136
|
Richard Preen
|
Richard J. Preen and Larry Bull
|
Toward the Coevolution of Novel Vertical-Axis Wind Turbines
|
appears in IEEE Transactions on Evolutionary Computation (2014).
arXiv admin note: substantial text overlap with arXiv:1212.5271,
arXiv:1204.4107
|
IEEE Transactions on Evolutionary Computation (2015),
19(2):284-294
|
10.1109/TEVC.2014.2316199
| null |
cs.NE cs.AI cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The production of renewable and sustainable energy is one of the most
important challenges currently facing mankind. Wind has made an increasing
contribution to the world's energy supply mix, but still remains a long way
from reaching its full potential. In this paper, we investigate the use of
artificial evolution to design vertical-axis wind turbine prototypes that are
physically instantiated and evaluated under fan-generated wind conditions.
Initially a conventional evolutionary algorithm is used to explore the design
space of a single wind turbine and later a cooperative coevolutionary algorithm
is used to explore the design space of an array of wind turbines. Artificial
neural networks are used throughout as surrogate models to assist learning and
found to reduce the number of fabrications required to reach a higher
aerodynamic efficiency. Unlike in other approaches, such as computational fluid
dynamics simulations, no mathematical formulations are used and no model
assumptions are made.
|
[
{
"created": "Tue, 13 Aug 2013 14:02:34 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Jan 2015 16:25:33 GMT",
"version": "v2"
}
] |
2015-07-02
|
[
[
"Preen",
"Richard J.",
""
],
[
"Bull",
"Larry",
""
]
] |
The production of renewable and sustainable energy is one of the most important challenges currently facing mankind. Wind has made an increasing contribution to the world's energy supply mix, but still remains a long way from reaching its full potential. In this paper, we investigate the use of artificial evolution to design vertical-axis wind turbine prototypes that are physically instantiated and evaluated under fan-generated wind conditions. Initially a conventional evolutionary algorithm is used to explore the design space of a single wind turbine and later a cooperative coevolutionary algorithm is used to explore the design space of an array of wind turbines. Artificial neural networks are used throughout as surrogate models to assist learning and found to reduce the number of fabrications required to reach a higher aerodynamic efficiency. Unlike in other approaches, such as computational fluid dynamics simulations, no mathematical formulations are used and no model assumptions are made.
|
2208.11885
|
Melissa Swift
|
Melissa E. Swift (1 and 2), Wyatt Ayers (1), Sophie Pallanck (1),
Scott Wehrwein (1) ((1) Western Washington University, (2) Pacific Northwest
National Laboratory)
|
Visualizing the Passage of Time with Video Temporal Pyramids
|
11 pages, 9 figures, accepted for presentation at IEEE VIS 2022, will
be published in conference proceedings, supplementary material and more can
be found on our project page at
https://fw.cs.wwu.edu/~wehrwes/TemporalPyramids
| null |
10.1109/TVCG.2022.3209454
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
What can we learn about a scene by watching it for months or years? A video
recorded over a long timespan will depict interesting phenomena at multiple
timescales, but identifying and viewing them presents a challenge. The video is
too long to watch in full, and some occurrences are too slow to experience in
real-time, such as glacial retreat. Timelapse videography is a common approach
to summarizing long videos and visualizing slow timescales. However, a
timelapse is limited to a single chosen temporal frequency, and often appears
flickery due to aliasing and temporal discontinuities between frames. In this
paper, we propose Video Temporal Pyramids, a technique that addresses these
limitations and expands the possibilities for visualizing the passage of time.
Inspired by spatial image pyramids from computer vision, we developed an
algorithm that builds video pyramids in the temporal domain. Each level of a
Video Temporal Pyramid visualizes a different timescale; for instance, videos
from the monthly timescale are usually good for visualizing seasonal changes,
while videos from the one-minute timescale are best for visualizing sunrise or
the movement of clouds across the sky. To help explore the different pyramid
levels, we also propose a Video Spectrogram to visualize the amount of activity
across the entire pyramid, providing a holistic overview of the scene dynamics
and the ability to explore and discover phenomena across time and timescales.
To demonstrate our approach, we have built Video Temporal Pyramids from ten
outdoor scenes, each containing months or years of data. We compare Video
Temporal Pyramid layers to naive timelapse and find that our pyramids enable
alias-free viewing of longer-term changes. We also demonstrate that the Video
Spectrogram facilitates exploration and discovery of phenomena across pyramid
levels, by enabling both overview and detail-focused perspectives.
|
[
{
"created": "Thu, 25 Aug 2022 06:19:02 GMT",
"version": "v1"
}
] |
2023-05-02
|
[
[
"Swift",
"Melissa E.",
"",
"1 and 2"
],
[
"Ayers",
"Wyatt",
""
],
[
"Pallanck",
"Sophie",
""
],
[
"Wehrwein",
"Scott",
""
]
] |
What can we learn about a scene by watching it for months or years? A video recorded over a long timespan will depict interesting phenomena at multiple timescales, but identifying and viewing them presents a challenge. The video is too long to watch in full, and some occurrences are too slow to experience in real-time, such as glacial retreat. Timelapse videography is a common approach to summarizing long videos and visualizing slow timescales. However, a timelapse is limited to a single chosen temporal frequency, and often appears flickery due to aliasing and temporal discontinuities between frames. In this paper, we propose Video Temporal Pyramids, a technique that addresses these limitations and expands the possibilities for visualizing the passage of time. Inspired by spatial image pyramids from computer vision, we developed an algorithm that builds video pyramids in the temporal domain. Each level of a Video Temporal Pyramid visualizes a different timescale; for instance, videos from the monthly timescale are usually good for visualizing seasonal changes, while videos from the one-minute timescale are best for visualizing sunrise or the movement of clouds across the sky. To help explore the different pyramid levels, we also propose a Video Spectrogram to visualize the amount of activity across the entire pyramid, providing a holistic overview of the scene dynamics and the ability to explore and discover phenomena across time and timescales. To demonstrate our approach, we have built Video Temporal Pyramids from ten outdoor scenes, each containing months or years of data. We compare Video Temporal Pyramid layers to naive timelapse and find that our pyramids enable alias-free viewing of longer-term changes. We also demonstrate that the Video Spectrogram facilitates exploration and discovery of phenomena across pyramid levels, by enabling both overview and detail-focused perspectives.
|
1810.05716
|
Yanting Pei
|
Yanting Pei, Yaping Huang, Qi Zou, Yuhang Lu and Song Wang
|
Does Haze Removal Help CNN-based Image Classification?
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hazy images are common in real scenarios and many dehazing methods have been
developed to automatically remove the haze from images. Typically, the goal of
image dehazing is to produce clearer images from which human vision can better
identify the object and structural details present in the images. When the
ground-truth haze-free image is available for a hazy image, quantitative
evaluation of image dehazing is usually based on objective metrics, such as
Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). However, in
many applications, large-scale images are collected not for visual examination
by humans. Instead, they are used for many high-level vision tasks, such as
automatic classification, recognition and categorization. One fundamental
problem here is whether various dehazing methods can produce clearer images
that can help improve the performance of the high-level tasks. In this paper,
we empirically study this problem in the important task of image classification
by using both synthetic and real hazy image datasets. From the experimental
results, we find that the existing image-dehazing methods cannot improve the
image-classification performance much and sometimes even reduce it.
|
[
{
"created": "Fri, 12 Oct 2018 20:46:29 GMT",
"version": "v1"
}
] |
2018-10-16
|
[
[
"Pei",
"Yanting",
""
],
[
"Huang",
"Yaping",
""
],
[
"Zou",
"Qi",
""
],
[
"Lu",
"Yuhang",
""
],
[
"Wang",
"Song",
""
]
] |
Hazy images are common in real scenarios and many dehazing methods have been developed to automatically remove the haze from images. Typically, the goal of image dehazing is to produce clearer images from which human vision can better identify the object and structural details present in the images. When the ground-truth haze-free image is available for a hazy image, quantitative evaluation of image dehazing is usually based on objective metrics, such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). However, in many applications, large-scale images are collected not for visual examination by humans. Instead, they are used for many high-level vision tasks, such as automatic classification, recognition and categorization. One fundamental problem here is whether various dehazing methods can produce clearer images that can help improve the performance of the high-level tasks. In this paper, we empirically study this problem in the important task of image classification by using both synthetic and real hazy image datasets. From the experimental results, we find that the existing image-dehazing methods cannot improve the image-classification performance much and sometimes even reduce it.
|
1905.02025
|
Grigorios Kalliatakis
|
Grigorios Kalliatakis, Shoaib Ehsan, Maria Fasli, Klaus McDonald-Maier
|
DisplaceNet: Recognising Displaced People from Images by Exploiting
Dominance Level
|
To be published in CVPR Workshop on Computer Vision for Global
Challenges (CV4GC). arXiv admin note: substantial text overlap with
arXiv:1902.03817
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Every year millions of men, women and children are forced to leave their
homes and seek refuge from wars, human rights violations, persecution, and
natural disasters. The number of forcibly displaced people came at a record
rate of 44,400 every day throughout 2017, raising the cumulative total to 68.5
million at the year's end, overtaking the total population of the United Kingdom.
Up to 85% of the forcibly displaced find refuge in low- and middle-income
countries, calling for increased humanitarian assistance worldwide. To reduce
the amount of manual labour required for human-rights-related image analysis,
we introduce DisplaceNet, a novel model which infers potential displaced people
from images by integrating the control level of the situation and conventional
convolutional neural network (CNN) classifier into one framework for image
classification. Experimental results show that DisplaceNet achieves up to a 4%
gain in coverage (the proportion of a data set for which a classifier is able
to produce a prediction) over the sole use of a CNN classifier. Our dataset, code and
trained models will be available online at
https://github.com/GKalliatakis/DisplaceNet.
|
[
{
"created": "Fri, 3 May 2019 11:07:27 GMT",
"version": "v1"
}
] |
2019-05-07
|
[
[
"Kalliatakis",
"Grigorios",
""
],
[
"Ehsan",
"Shoaib",
""
],
[
"Fasli",
"Maria",
""
],
[
"McDonald-Maier",
"Klaus",
""
]
] |
Every year millions of men, women and children are forced to leave their homes and seek refuge from wars, human rights violations, persecution, and natural disasters. The number of forcibly displaced people came at a record rate of 44,400 every day throughout 2017, raising the cumulative total to 68.5 million at the year's end, overtaking the total population of the United Kingdom. Up to 85% of the forcibly displaced find refuge in low- and middle-income countries, calling for increased humanitarian assistance worldwide. To reduce the amount of manual labour required for human-rights-related image analysis, we introduce DisplaceNet, a novel model which infers potential displaced people from images by integrating the control level of the situation and conventional convolutional neural network (CNN) classifier into one framework for image classification. Experimental results show that DisplaceNet achieves up to a 4% gain in coverage (the proportion of a data set for which a classifier is able to produce a prediction) over the sole use of a CNN classifier. Our dataset, code and trained models will be available online at https://github.com/GKalliatakis/DisplaceNet.
|
2208.02885
|
Zilin Si
|
Zilin Si, Zirui Zhu, Arpit Agarwal, Stuart Anderson and Wenzhen Yuan
|
Grasp Stability Prediction with Sim-to-Real Transfer from Tactile
Sensing
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Robot simulation has been an essential tool for data-driven manipulation
tasks. However, most existing simulation frameworks lack either efficient and
accurate models of physical interactions with tactile sensors or realistic
tactile simulation. This makes the sim-to-real transfer for tactile-based
manipulation tasks still challenging. In this work, we integrate simulation of
robot dynamics and vision-based tactile sensors by modeling the physics of
contact. This contact model uses simulated contact forces at the robot's
end-effector to inform the generation of realistic tactile outputs. To
eliminate the sim-to-real transfer gap, we calibrate our physics simulator of
robot dynamics, contact model, and tactile optical simulator with real-world
data, and then we demonstrate the effectiveness of our system on a zero-shot
sim-to-real grasp stability prediction task where we achieve an average
accuracy of 90.7% on various objects. Experiments reveal the potential of
applying our simulation framework to more complicated manipulation tasks. We
open-source our simulation framework at
https://github.com/CMURoboTouch/Taxim/tree/taxim-robot.
|
[
{
"created": "Thu, 4 Aug 2022 20:55:09 GMT",
"version": "v1"
}
] |
2022-08-08
|
[
[
"Si",
"Zilin",
""
],
[
"Zhu",
"Zirui",
""
],
[
"Agarwal",
"Arpit",
""
],
[
"Anderson",
"Stuart",
""
],
[
"Yuan",
"Wenzhen",
""
]
] |
Robot simulation has been an essential tool for data-driven manipulation tasks. However, most existing simulation frameworks lack either efficient and accurate models of physical interactions with tactile sensors or realistic tactile simulation. This makes the sim-to-real transfer for tactile-based manipulation tasks still challenging. In this work, we integrate simulation of robot dynamics and vision-based tactile sensors by modeling the physics of contact. This contact model uses simulated contact forces at the robot's end-effector to inform the generation of realistic tactile outputs. To eliminate the sim-to-real transfer gap, we calibrate our physics simulator of robot dynamics, contact model, and tactile optical simulator with real-world data, and then we demonstrate the effectiveness of our system on a zero-shot sim-to-real grasp stability prediction task where we achieve an average accuracy of 90.7% on various objects. Experiments reveal the potential of applying our simulation framework to more complicated manipulation tasks. We open-source our simulation framework at https://github.com/CMURoboTouch/Taxim/tree/taxim-robot.
|
1907.02703
|
Yong Min
|
Yong Min, Tingjun Jiang, Cheng Jin, Qu Li and Xiaogang Jin
|
Intelligent social bots uncover the link between user preference and
diversity of news consumption
|
The refined manuscript is under review in Royal Society Open Science
|
Roy. Soc. Open Sci. 6 (2019) 190868
|
10.1098/rsos.190868
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The boom of online social media and microblogging platforms has rapidly
altered the way we consume news and exchange opinions. Even though considerable efforts
try to recommend various contents to users, loss of information diversity and
the polarization of interest groups are still an enormous challenge for
industry and academia. Here, we take advantage of benign social bots to design
a controlled experiment on Weibo (the largest microblogging platform in China).
These software bots can exhibit human-like behavior (e.g., preferring
particular content) and simulate the formation of personal social networks and
news consumption under two well-accepted sociological hypotheses (i.e.,
homophily and triadic closure). We deployed 68 bots to Weibo, and each bot ran
for at least 2 months and followed 100 to 120 accounts. In total, we observed
5,318 users and recorded about 630,000 messages exposed to these bots. Our
results show that, even with the same selection behaviors, bots preferring
entertainment content are more likely to form polarized communities with their
peers, in which about 80\% of the information they consume is of the same type,
a significant difference from bots preferring sci-tech content. The result
suggests that user preference plays a more crucial role in limiting users'
access to diverse content than the two well-known drivers (self-selection and
pre-selection). Furthermore, our results reveal an
ingenious connection between specific content and its propagating
sub-structures in the same social network. In the Weibo network, entertainment
news favors a unidirectional star-like sub-structure, while sci-tech news
spreads on a bidirectional clustering sub-structure. This connection can
amplify the diversity effect of user preference. The discovery may have
important implications for diffusion dynamics study and recommendation system
design.
|
[
{
"created": "Fri, 5 Jul 2019 07:20:48 GMT",
"version": "v1"
}
] |
2020-03-05
|
[
[
"Min",
"Yong",
""
],
[
"Jiang",
"Tingjun",
""
],
[
"Jin",
"Cheng",
""
],
[
"Li",
"Qu",
""
],
[
"Jin",
"Xiaogang",
""
]
] |
The boom of online social media and microblogging platforms has rapidly altered the way we consume news and exchange opinions. Even though considerable efforts try to recommend various contents to users, loss of information diversity and the polarization of interest groups are still an enormous challenge for industry and academia. Here, we take advantage of benign social bots to design a controlled experiment on Weibo (the largest microblogging platform in China). These software bots can exhibit human-like behavior (e.g., preferring particular content) and simulate the formation of personal social networks and news consumption under two well-accepted sociological hypotheses (i.e., homophily and triadic closure). We deployed 68 bots to Weibo, and each bot ran for at least 2 months and followed 100 to 120 accounts. In total, we observed 5,318 users and recorded about 630,000 messages exposed to these bots. Our results show that, even with the same selection behaviors, bots preferring entertainment content are more likely to form polarized communities with their peers, in which about 80\% of the information they consume is of the same type, a significant difference from bots preferring sci-tech content. The result suggests that user preference plays a more crucial role in limiting users' access to diverse content than the two well-known drivers (self-selection and pre-selection). Furthermore, our results reveal an ingenious connection between specific content and its propagating sub-structures in the same social network. In the Weibo network, entertainment news favors a unidirectional star-like sub-structure, while sci-tech news spreads on a bidirectional clustering sub-structure. This connection can amplify the diversity effect of user preference. The discovery may have important implications for diffusion dynamics study and recommendation system design.
|
1906.00284
|
Ehsan Aryafar
|
Ehsan Aryafar, Alireza Keshavarz-Haddad, Carlee Joe-Wong
|
Proportional Fair RAT Aggregation in HetNets
|
Extended version of the 31st International Teletraffic Congress (ITC
2019) conference paper
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Heterogeneity in wireless network architectures (i.e., the coexistence of 3G,
LTE, 5G, WiFi, etc.) has become a key component of current and future
generation cellular networks. Simultaneous aggregation of each client's traffic
across multiple such radio access technologies (RATs) / base stations (BSs) can
significantly increase the system throughput, and has become an important
feature of cellular standards on multi-RAT integration. Distributed algorithms
that can realize the full potential of this aggregation are thus of great
importance to operators. In this paper, we study the problem of resource
allocation for multi-RAT traffic aggregation in HetNets (heterogeneous
networks). Our goal is to ensure that the resources at each BS are allocated so
that the aggregate throughput achieved by each client across its RATs satisfies
a proportional fairness (PF) criterion. In particular, we provide a simple
distributed algorithm for resource allocation at each BS that extends the PF
allocation algorithm for a single BS. Despite its simplicity and lack of
coordination across the BSs, we show that our algorithm converges to the
desired PF solution and provide (tight) bounds on its convergence speed. We
also study the characteristics of the optimal solution and use its properties
to prove the optimality of our algorithm's outcomes.
|
[
{
"created": "Sat, 1 Jun 2019 20:22:14 GMT",
"version": "v1"
}
] |
2019-06-04
|
[
[
"Aryafar",
"Ehsan",
""
],
[
"Keshavarz-Haddad",
"Alireza",
""
],
[
"Joe-Wong",
"Carlee",
""
]
] |
Heterogeneity in wireless network architectures (i.e., the coexistence of 3G, LTE, 5G, WiFi, etc.) has become a key component of current and future generation cellular networks. Simultaneous aggregation of each client's traffic across multiple such radio access technologies (RATs) / base stations (BSs) can significantly increase the system throughput, and has become an important feature of cellular standards on multi-RAT integration. Distributed algorithms that can realize the full potential of this aggregation are thus of great importance to operators. In this paper, we study the problem of resource allocation for multi-RAT traffic aggregation in HetNets (heterogeneous networks). Our goal is to ensure that the resources at each BS are allocated so that the aggregate throughput achieved by each client across its RATs satisfies a proportional fairness (PF) criterion. In particular, we provide a simple distributed algorithm for resource allocation at each BS that extends the PF allocation algorithm for a single BS. Despite its simplicity and lack of coordination across the BSs, we show that our algorithm converges to the desired PF solution and provide (tight) bounds on its convergence speed. We also study the characteristics of the optimal solution and use its properties to prove the optimality of our algorithm's outcomes.
|
2101.06409
|
Ashish Kumar
|
Ashish Kumar, L. Behera
|
Shape Back-Projection In 3D Scenes
|
7 pages, 7 figures, 3 tables
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this work, we propose a novel framework, shape back-projection, for
computationally efficient point cloud processing in a probabilistic manner. The
primary components of the technique are a shape histogram and a back-projection
procedure. The technique measures similarity between 3D surfaces by analyzing
their geometrical properties. It is analogous to color back-projection which
measures similarity between images, simply by looking at their color
distributions. In the overall process, first, shape histogram of a sample
surface (e.g. planar) is computed, which captures the profile of surface
normals around a point in the form of a probability distribution. Later, the
histogram is back-projected onto a test surface and a likelihood score is
obtained. The score indicates how likely a point on the test surface is to
behave geometrically like the sample surface. Shape back-projection finds its
application in binary surface classification, high curvature edge detection in
unorganized point cloud, automated point cloud labeling for 3D-CNNs
(convolutional neural network) etc. The algorithm can also be used for
real-time robotic operations such as autonomous object picking in warehouse
automation, ground plane extraction for autonomous vehicles and can be deployed
easily on computationally limited platforms (UAVs).
|
[
{
"created": "Sat, 16 Jan 2021 09:00:34 GMT",
"version": "v1"
}
] |
2021-01-19
|
[
[
"Kumar",
"Ashish",
""
],
[
"Behera",
"L.",
""
]
] |
In this work, we propose a novel framework, shape back-projection, for computationally efficient point cloud processing in a probabilistic manner. The primary components of the technique are a shape histogram and a back-projection procedure. The technique measures similarity between 3D surfaces by analyzing their geometrical properties. It is analogous to color back-projection, which measures similarity between images simply by looking at their color distributions. In the overall process, first, the shape histogram of a sample surface (e.g. planar) is computed, which captures the profile of surface normals around a point in the form of a probability distribution. Later, the histogram is back-projected onto a test surface and a likelihood score is obtained. The score indicates how likely a point on the test surface is to behave geometrically like the sample surface. Shape back-projection finds its application in binary surface classification, high curvature edge detection in unorganized point cloud, automated point cloud labeling for 3D-CNNs (convolutional neural network) etc. The algorithm can also be used for real-time robotic operations such as autonomous object picking in warehouse automation, ground plane extraction for autonomous vehicles and can be deployed easily on computationally limited platforms (UAVs).
|
2205.14280
|
Li Niu
|
Li Niu, Qingyang Liu, Zhenchen Liu, Jiangtong Li
|
Fast Object Placement Assessment
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Object placement assessment (OPA) aims to predict the rationality score of a
composite image in terms of the placement (e.g., scale, location) of the
inserted foreground object. However, given a pair of scaled foreground and
background, to enumerate all the reasonable locations, an existing OPA model
needs to place
the foreground at each location on the background and pass the obtained
composite image through the model one at a time, which is very time-consuming.
In this work, we investigate a new task named fast OPA. Specifically,
provided with a scaled foreground and a background, we only pass them through
the model once and predict the rationality scores for all locations. To
accomplish this task, we propose a pioneering fast OPA model with several
innovations (i.e., foreground dynamic filter, background prior transfer, and
composite feature mimicking) to bridge the performance gap between the slow
and fast OPA models. Extensive experiments on the OPA dataset show that our
proposed fast OPA model performs on par with the slow OPA model but runs
significantly faster.
|
[
{
"created": "Sat, 28 May 2022 00:28:32 GMT",
"version": "v1"
}
] |
2022-05-31
|
[
[
"Niu",
"Li",
""
],
[
"Liu",
"Qingyang",
""
],
[
"Liu",
"Zhenchen",
""
],
[
"Li",
"Jiangtong",
""
]
] |
Object placement assessment (OPA) aims to predict the rationality score of a composite image in terms of the placement (e.g., scale, location) of the inserted foreground object. However, given a pair of scaled foreground and background, to enumerate all the reasonable locations, an existing OPA model needs to place the foreground at each location on the background and pass the obtained composite image through the model one at a time, which is very time-consuming. In this work, we investigate a new task named fast OPA. Specifically, provided with a scaled foreground and a background, we only pass them through the model once and predict the rationality scores for all locations. To accomplish this task, we propose a pioneering fast OPA model with several innovations (i.e., foreground dynamic filter, background prior transfer, and composite feature mimicking) to bridge the performance gap between the slow and fast OPA models. Extensive experiments on the OPA dataset show that our proposed fast OPA model performs on par with the slow OPA model but runs significantly faster.
|
2405.06652
|
Yuhongmo Mo
|
Yuhong Mo, Hao Qin, Yushan Dong, Ziyi Zhu, Zhenglin Li
|
Large Language Model (LLM) AI text generation detection based on
transformer deep learning algorithm
|
6 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a tool for detecting LLM AI text generation is developed based
on the Transformer model, aiming to improve the accuracy of AI text generation
detection and provide a reference for subsequent research. First, the text is
Unicode-normalised and converted to lowercase; characters other than alphabetic
characters and punctuation marks are removed by regular expressions, spaces are
added around punctuation marks, leading and trailing spaces are removed,
consecutive ellipses are replaced with single spaces, and the text is joined
using the specified delimiter. Next, remaining non-alphabetic characters and
extra whitespace characters are removed, multiple consecutive whitespace
characters are replaced with a single space, and the text is again converted to
lowercase.
The deep learning model combines layers such as LSTM, Transformer and CNN for
text classification or sequence labelling tasks. The training and validation
sets show that the model loss decreases from 0.127 to 0.005 and accuracy
increases from 94.96% to 99.8%, indicating that the model has good detection
and classification ability for AI-generated text. The test set confusion matrix and
accuracy show that the model has 99% prediction accuracy for AI-generated text,
with a precision of 0.99, a recall of 1, and an F1 score of 0.99, achieving a
very high classification accuracy. Looking forward, it has the prospect of wide
application in the field of AI text detection.
|
[
{
"created": "Sat, 6 Apr 2024 06:22:45 GMT",
"version": "v1"
}
] |
2024-05-14
|
[
[
"Mo",
"Yuhong",
""
],
[
"Qin",
"Hao",
""
],
[
"Dong",
"Yushan",
""
],
[
"Zhu",
"Ziyi",
""
],
[
"Li",
"Zhenglin",
""
]
] |
In this paper, a tool for detecting LLM AI text generation is developed based on the Transformer model, aiming to improve the accuracy of AI text generation detection and provide a reference for subsequent research. First, the text is Unicode-normalised and converted to lowercase; characters other than alphabetic characters and punctuation marks are removed by regular expressions, spaces are added around punctuation marks, leading and trailing spaces are removed, consecutive ellipses are replaced with single spaces, and the text is joined using the specified delimiter. Next, remaining non-alphabetic characters and extra whitespace characters are removed, multiple consecutive whitespace characters are replaced with a single space, and the text is again converted to lowercase. The deep learning model combines layers such as LSTM, Transformer and CNN for text classification or sequence labelling tasks. The training and validation sets show that the model loss decreases from 0.127 to 0.005 and accuracy increases from 94.96% to 99.8%, indicating that the model has good detection and classification ability for AI-generated text. The test set confusion matrix and accuracy show that the model has 99% prediction accuracy for AI-generated text, with a precision of 0.99, a recall of 1, and an F1 score of 0.99, achieving a very high classification accuracy. Looking forward, it has the prospect of wide application in the field of AI text detection.
|
2112.01330
|
Moein Sorkhei
|
Moein Sorkhei, Yue Liu, Hossein Azizpour, Edward Azavedo, Karin
Dembrower, Dimitra Ntoula, Athanasios Zouzos, Fredrik Strand, Kevin Smith
|
CSAW-M: An Ordinal Classification Dataset for Benchmarking Mammographic
Masking of Cancer
|
35th Conference on Neural Information Processing Systems (NeurIPS
2021) Track on Datasets and Benchmarks
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interval and large invasive breast cancers, which are associated with worse
prognosis than other cancers, are usually detected at a late stage due to false
negative assessments of screening mammograms. The missed screening-time
detection is commonly caused by the tumor being obscured by its surrounding
breast tissues, a phenomenon called masking. To study and benchmark
mammographic masking of cancer, in this work we introduce CSAW-M, the largest
public mammographic dataset, collected from over 10,000 individuals and
annotated with potential masking. In contrast to the previous approaches which
measure breast image density as a proxy, our dataset directly provides
annotations of masking potential assessments from five specialists. We also
trained deep learning models on CSAW-M to estimate the masking level and showed
that the estimated masking is significantly more predictive of screening
participants diagnosed with interval and large invasive cancers -- without
being explicitly trained for these tasks -- than its breast density
counterparts.
|
[
{
"created": "Thu, 2 Dec 2021 15:31:51 GMT",
"version": "v1"
}
] |
2021-12-03
|
[
[
"Sorkhei",
"Moein",
""
],
[
"Liu",
"Yue",
""
],
[
"Azizpour",
"Hossein",
""
],
[
"Azavedo",
"Edward",
""
],
[
"Dembrower",
"Karin",
""
],
[
"Ntoula",
"Dimitra",
""
],
[
"Zouzos",
"Athanasios",
""
],
[
"Strand",
"Fredrik",
""
],
[
"Smith",
"Kevin",
""
]
] |
Interval and large invasive breast cancers, which are associated with worse prognosis than other cancers, are usually detected at a late stage due to false negative assessments of screening mammograms. The missed screening-time detection is commonly caused by the tumor being obscured by its surrounding breast tissues, a phenomenon called masking. To study and benchmark mammographic masking of cancer, in this work we introduce CSAW-M, the largest public mammographic dataset, collected from over 10,000 individuals and annotated with potential masking. In contrast to the previous approaches which measure breast image density as a proxy, our dataset directly provides annotations of masking potential assessments from five specialists. We also trained deep learning models on CSAW-M to estimate the masking level and showed that the estimated masking is significantly more predictive of screening participants diagnosed with interval and large invasive cancers -- without being explicitly trained for these tasks -- than its breast density counterparts.
|
2311.07761
|
Maximilian Luz
|
Maximilian Luz, Rohit Mohan, Ahmed Rida Sekkat, Oliver Sawade, Elmar
Matthes, Thomas Brox, Abhinav Valada
|
Amodal Optical Flow
| null | null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical flow estimation is very challenging in situations with transparent or
occluded objects. In this work, we address these challenges at the task level
by introducing Amodal Optical Flow, which integrates optical flow with amodal
perception. Instead of only representing the visible regions, we define amodal
optical flow as a multi-layered pixel-level motion field that encompasses both
visible and occluded regions of the scene. To facilitate research on this new
task, we extend the AmodalSynthDrive dataset to include pixel-level labels for
amodal optical flow estimation. We present several strong baselines, along with
the Amodal Flow Quality metric to quantify the performance in an interpretable
manner. Furthermore, we propose the novel AmodalFlowNet as an initial step
toward addressing this task. AmodalFlowNet consists of a transformer-based
cost-volume encoder paired with a recurrent transformer decoder which
facilitates recurrent hierarchical feature propagation and amodal semantic
grounding. We demonstrate the tractability of amodal optical flow in extensive
experiments and show its utility for downstream tasks such as panoptic
tracking. We make the dataset, code, and trained models publicly available at
http://amodal-flow.cs.uni-freiburg.de.
|
[
{
"created": "Mon, 13 Nov 2023 21:21:43 GMT",
"version": "v1"
},
{
"created": "Tue, 7 May 2024 17:36:29 GMT",
"version": "v2"
}
] |
2024-05-08
|
[
[
"Luz",
"Maximilian",
""
],
[
"Mohan",
"Rohit",
""
],
[
"Sekkat",
"Ahmed Rida",
""
],
[
"Sawade",
"Oliver",
""
],
[
"Matthes",
"Elmar",
""
],
[
"Brox",
"Thomas",
""
],
[
"Valada",
"Abhinav",
""
]
] |
Optical flow estimation is very challenging in situations with transparent or occluded objects. In this work, we address these challenges at the task level by introducing Amodal Optical Flow, which integrates optical flow with amodal perception. Instead of only representing the visible regions, we define amodal optical flow as a multi-layered pixel-level motion field that encompasses both visible and occluded regions of the scene. To facilitate research on this new task, we extend the AmodalSynthDrive dataset to include pixel-level labels for amodal optical flow estimation. We present several strong baselines, along with the Amodal Flow Quality metric to quantify the performance in an interpretable manner. Furthermore, we propose the novel AmodalFlowNet as an initial step toward addressing this task. AmodalFlowNet consists of a transformer-based cost-volume encoder paired with a recurrent transformer decoder which facilitates recurrent hierarchical feature propagation and amodal semantic grounding. We demonstrate the tractability of amodal optical flow in extensive experiments and show its utility for downstream tasks such as panoptic tracking. We make the dataset, code, and trained models publicly available at http://amodal-flow.cs.uni-freiburg.de.
|
2102.11057
|
Guillaume Jaume
|
Pushpak Pati and Guillaume Jaume and Antonio Foncubierta and Florinda
Feroce and Anna Maria Anniciello and Giosu\`e Scognamiglio and Nadia Brancati
and Maryse Fiche and Estelle Dubruc and Daniel Riccio and Maurizio Di Bonito
and Giuseppe De Pietro and Gerardo Botti and Jean-Philippe Thiran and Maria
Frucci and Orcun Goksel and Maria Gabrani
|
Hierarchical Graph Representations in Digital Pathology
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Cancer diagnosis, prognosis, and therapy response predictions from tissue
specimens highly depend on the phenotype and topological distribution of
constituting histological entities. Thus, adequate tissue representations for
encoding histological entities are imperative for computer-aided cancer patient
care. To this end, several approaches have leveraged cell-graphs that encode
cell morphology and organization to denote the tissue information. These allow
for utilizing machine learning to map tissue representations to tissue
functionality to help quantify their relationship. Though cellular information
is crucial, it is incomplete alone to comprehensively characterize complex
tissue structure. We herein treat the tissue as a hierarchical composition of
multiple types of histological entities from fine to coarse level, capturing
multivariate tissue information at multiple levels. We propose a novel
multi-level hierarchical entity-graph representation of tissue specimens to
model hierarchical compositions that encode histological entities as well as
their intra- and inter-entity level interactions. Subsequently, a graph neural
network is proposed to operate on the hierarchical entity-graph representation
to map the tissue structure to tissue functionality. Specifically, for input
histology images we utilize well-defined cells and tissue regions to build
HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net,
a graph neural network, to classify such HACT representations. As part of this
work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large
cohort of H&E stained breast tumor images, to evaluate our proposed methodology
against pathologists and state-of-the-art approaches. Through comparative
assessment and ablation studies, our method is demonstrated to yield superior
classification results compared to alternative methods as well as pathologists.
|
[
{
"created": "Mon, 22 Feb 2021 14:30:57 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Mar 2021 09:09:02 GMT",
"version": "v2"
}
] |
2021-03-18
|
[
[
"Pati",
"Pushpak",
""
],
[
"Jaume",
"Guillaume",
""
],
[
"Foncubierta",
"Antonio",
""
],
[
"Feroce",
"Florinda",
""
],
[
"Anniciello",
"Anna Maria",
""
],
[
"Scognamiglio",
"Giosuè",
""
],
[
"Brancati",
"Nadia",
""
],
[
"Fiche",
"Maryse",
""
],
[
"Dubruc",
"Estelle",
""
],
[
"Riccio",
"Daniel",
""
],
[
"Di Bonito",
"Maurizio",
""
],
[
"De Pietro",
"Giuseppe",
""
],
[
"Botti",
"Gerardo",
""
],
[
"Thiran",
"Jean-Philippe",
""
],
[
"Frucci",
"Maria",
""
],
[
"Goksel",
"Orcun",
""
],
[
"Gabrani",
"Maria",
""
]
] |
Cancer diagnosis, prognosis, and therapy response predictions from tissue specimens highly depend on the phenotype and topological distribution of constituting histological entities. Thus, adequate tissue representations for encoding histological entities are imperative for computer-aided cancer patient care. To this end, several approaches have leveraged cell-graphs that encode cell morphology and organization to denote the tissue information. These allow for utilizing machine learning to map tissue representations to tissue functionality to help quantify their relationship. Though cellular information is crucial, it is incomplete alone to comprehensively characterize complex tissue structure. We herein treat the tissue as a hierarchical composition of multiple types of histological entities from fine to coarse level, capturing multivariate tissue information at multiple levels. We propose a novel multi-level hierarchical entity-graph representation of tissue specimens to model hierarchical compositions that encode histological entities as well as their intra- and inter-entity level interactions. Subsequently, a graph neural network is proposed to operate on the hierarchical entity-graph representation to map the tissue structure to tissue functionality. Specifically, for input histology images we utilize well-defined cells and tissue regions to build HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net, a graph neural network, to classify such HACT representations. As part of this work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of H&E stained breast tumor images, to evaluate our proposed methodology against pathologists and state-of-the-art approaches. Through comparative assessment and ablation studies, our method is demonstrated to yield superior classification results compared to alternative methods as well as pathologists.
|
cs/0507029
|
Samuel Landau
|
Samuel Landau (INRIA Futurs), Olivier Sigaud (LIP6), Marc Schoenauer
(INRIA Futurs)
|
ATNoSFERES revisited
| null |
Dans Proceedings of the Genetic and Evolutionary Computation
Conference, GECCO-2005 [OAI: oai:hal.inria.fr:inria-00000158_v1] -
http://hal.inria.fr/inria-00000158
| null | null |
cs.AI
| null |
ATNoSFERES is a Pittsburgh-style Learning Classifier System (LCS) in which
the rules are represented as edges of an Augmented Transition Network.
Genotypes are strings of tokens of a stack-based language, whose execution
builds the labeled graph. The original ATNoSFERES, using a bitstring to
represent the language tokens, has been favorably compared in previous work to
several Michigan-style LCS architectures in the context of non-Markov
problems. Several modifications of ATNoSFERES are proposed here: the most
important one conceptually being a representational change: each token is now
represented by an integer, hence the genotype is a string of integers; several
other modifications of the underlying grammar language are also proposed. The
resulting ATNoSFERES-II is validated on several standard animat non-Markov
problems, on which it outperforms all previously published results in the LCS
literature. The reasons for these improvements are carefully analyzed, and some
assumptions are proposed on the underlying mechanisms in order to explain these
good results.
|
[
{
"created": "Mon, 11 Jul 2005 13:11:25 GMT",
"version": "v1"
}
] |
2019-05-01
|
[
[
"Landau",
"Samuel",
"",
"INRIA Futurs"
],
[
"Sigaud",
"Olivier",
"",
"LIP6"
],
[
"Schoenauer",
"Marc",
"",
"INRIA Futurs"
]
] |
ATNoSFERES is a Pittsburgh-style Learning Classifier System (LCS) in which the rules are represented as edges of an Augmented Transition Network. Genotypes are strings of tokens of a stack-based language, whose execution builds the labeled graph. The original ATNoSFERES, using a bitstring to represent the language tokens, has been favorably compared in previous work to several Michigan-style LCS architectures in the context of non-Markov problems. Several modifications of ATNoSFERES are proposed here: the most important one conceptually being a representational change: each token is now represented by an integer, hence the genotype is a string of integers; several other modifications of the underlying grammar language are also proposed. The resulting ATNoSFERES-II is validated on several standard animat non-Markov problems, on which it outperforms all previously published results in the LCS literature. The reasons for these improvements are carefully analyzed, and some assumptions are proposed on the underlying mechanisms in order to explain these good results.
|
2407.13594
|
Nils Palumbo
|
Nils Palumbo, Ravi Mangal, Zifan Wang, Saranya Vijayakumar, Corina S.
Pasareanu, Somesh Jha
|
Mechanistically Interpreting a Transformer-based 2-SAT Solver: An
Axiomatic Approach
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Mechanistic interpretability aims to reverse engineer the computation
performed by a neural network in terms of its internal components. Although
there is a growing body of research on mechanistic interpretation of neural
networks, the notion of a mechanistic interpretation itself is often ad-hoc.
Inspired by the notion of abstract interpretation from the program analysis
literature that aims to develop approximate semantics for programs, we give a
set of axioms that formally characterize a mechanistic interpretation as a
description that approximately captures the semantics of the neural network
under analysis in a compositional manner. We use these axioms to guide the
mechanistic interpretability analysis of a Transformer-based model trained to
solve the well-known 2-SAT problem. We are able to reverse engineer the
algorithm learned by the model -- the model first parses the input formulas and
then evaluates their satisfiability via enumeration of different possible
valuations of the Boolean input variables. We also present evidence to support
that the mechanistic interpretation of the analyzed model indeed satisfies the
stated axioms.
|
[
{
"created": "Thu, 18 Jul 2024 15:32:44 GMT",
"version": "v1"
}
] |
2024-07-19
|
[
[
"Palumbo",
"Nils",
""
],
[
"Mangal",
"Ravi",
""
],
[
"Wang",
"Zifan",
""
],
[
"Vijayakumar",
"Saranya",
""
],
[
"Pasareanu",
"Corina S.",
""
],
[
"Jha",
"Somesh",
""
]
] |
Mechanistic interpretability aims to reverse engineer the computation performed by a neural network in terms of its internal components. Although there is a growing body of research on mechanistic interpretation of neural networks, the notion of a mechanistic interpretation itself is often ad-hoc. Inspired by the notion of abstract interpretation from the program analysis literature that aims to develop approximate semantics for programs, we give a set of axioms that formally characterize a mechanistic interpretation as a description that approximately captures the semantics of the neural network under analysis in a compositional manner. We use these axioms to guide the mechanistic interpretability analysis of a Transformer-based model trained to solve the well-known 2-SAT problem. We are able to reverse engineer the algorithm learned by the model -- the model first parses the input formulas and then evaluates their satisfiability via enumeration of different possible valuations of the Boolean input variables. We also present evidence to support that the mechanistic interpretation of the analyzed model indeed satisfies the stated axioms.
|
1905.09958
|
Ren\'ee Burton
|
Ren\'ee Burton
|
Characterizing Certain DNS DDoS Attacks
|
25 pages, 21 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper details data science research in the area of Cyber Threat
Intelligence applied to a specific type of Distributed Denial of Service (DDoS)
attack. We study a DDoS technique prevalent in the Domain Name System (DNS) for
which little malware has been recovered. Using data from a globally
distributed set of passive collectors (pDNS), we create a statistical
classifier to identify these attacks and then use unsupervised learning to
investigate the attack events and the malware that generates them. In the
first known major study of this technique, we discovered that current attacks have
little resemblance to published descriptions and identify several previously
unpublished features of the attacks. Through a combination of text and time
series features, we are able to characterize the dominant malware and
demonstrate that the number of global-scale attack systems is relatively small.
|
[
{
"created": "Thu, 23 May 2019 22:38:11 GMT",
"version": "v1"
},
{
"created": "Sat, 20 Jul 2019 16:21:23 GMT",
"version": "v2"
}
] |
2019-07-23
|
[
[
"Burton",
"Renée",
""
]
] |
This paper details data science research in the area of Cyber Threat Intelligence applied to a specific type of Distributed Denial of Service (DDoS) attack. We study a DDoS technique prevalent in the Domain Name System (DNS) for which little malware has been recovered. Using data from a globally distributed set of passive collectors (pDNS), we create a statistical classifier to identify these attacks and then use unsupervised learning to investigate the attack events and the malware that generates them. In the first known major study of this technique, we discovered that current attacks have little resemblance to published descriptions and identify several previously unpublished features of the attacks. Through a combination of text and time series features, we are able to characterize the dominant malware and demonstrate that the number of global-scale attack systems is relatively small.
|
2309.15418
|
Hengchang Hu
|
Hengchang Hu, Yiming Cao, Zhankui He, Samson Tan, Min-Yen Kan
|
Automatic Feature Fairness in Recommendation via Adversaries
|
SIGIR-AP'23
| null |
10.1145/3624918.3625318
| null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Fairness is a widely discussed topic in recommender systems, but its
practical implementation faces challenges in defining sensitive features while
maintaining recommendation accuracy. We propose feature fairness as the
foundation to achieve equitable treatment across diverse groups defined by
various feature combinations. This improves overall accuracy through balanced
feature generalizability. We introduce unbiased feature learning through
adversarial training, using adversarial perturbation to enhance feature
representation. The adversaries improve model generalization for
under-represented features. We adapt adversaries automatically based on two
forms of feature biases: frequency and combination variety of feature values.
This allows us to dynamically adjust perturbation strengths and adversarial
training weights. Stronger perturbations are applied to feature values with
fewer combination varieties to improve generalization, while higher weights for
low-frequency features address training imbalances. We leverage the Adaptive
Adversarial perturbation based on the widely-applied Factorization Machine
(AAFM) as our backbone model. In experiments, AAFM surpasses strong baselines
in both fairness and accuracy measures. AAFM excels in providing item- and
user-fairness for single- and multi-feature tasks, showcasing their versatility
and scalability. To maintain good accuracy, we find that adversarial
perturbation must be well-managed: during training, perturbations should not
overly persist and their strengths should decay.
|
[
{
"created": "Wed, 27 Sep 2023 05:48:05 GMT",
"version": "v1"
}
] |
2023-09-28
|
[
[
"Hu",
"Hengchang",
""
],
[
"Cao",
"Yiming",
""
],
[
"He",
"Zhankui",
""
],
[
"Tan",
"Samson",
""
],
[
"Kan",
"Min-Yen",
""
]
] |
Fairness is a widely discussed topic in recommender systems, but its practical implementation faces challenges in defining sensitive features while maintaining recommendation accuracy. We propose feature fairness as the foundation to achieve equitable treatment across diverse groups defined by various feature combinations. This improves overall accuracy through balanced feature generalizability. We introduce unbiased feature learning through adversarial training, using adversarial perturbation to enhance feature representation. The adversaries improve model generalization for under-represented features. We adapt adversaries automatically based on two forms of feature biases: frequency and combination variety of feature values. This allows us to dynamically adjust perturbation strengths and adversarial training weights. Stronger perturbations are applied to feature values with fewer combination varieties to improve generalization, while higher weights for low-frequency features address training imbalances. We leverage the Adaptive Adversarial perturbation based on the widely-applied Factorization Machine (AAFM) as our backbone model. In experiments, AAFM surpasses strong baselines in both fairness and accuracy measures. AAFM excels in providing item- and user-fairness for single- and multi-feature tasks, showcasing their versatility and scalability. To maintain good accuracy, we find that adversarial perturbation must be well-managed: during training, perturbations should not overly persist and their strengths should decay.
|
1906.07601
|
Yannick Est\`eve
|
Antoine Caubri\`ere, Natalia Tomashenko, Antoine Laurent, Emmanuel
Morin, Nathalie Camelin, Yannick Est\`eve
|
Curriculum-based transfer learning for an effective end-to-end spoken
language understanding and domain portability
|
Accepted to the INTERSPEECH 2019 conference. Submitted on March 29,
2019 (Paper submission deadline)
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an end-to-end approach to extract semantic concepts directly from
the speech audio signal. To overcome the lack of data available for this spoken
language understanding approach, we investigate the use of a transfer learning
strategy based on the principles of curriculum learning. This approach allows
us to exploit out-of-domain data that can help to prepare a fully neural
architecture. Experiments are carried out on the French MEDIA and PORTMEDIA
corpora and show that this end-to-end SLU approach reaches the best results
ever published on this task. We compare our approach to a classical pipeline
approach that uses ASR, POS tagging, lemmatizer, chunker... and other NLP tools
that aim to enrich ASR outputs that feed an SLU text-to-concepts system. Last,
we explore the promising capacity of our end-to-end SLU approach to address the
problem of domain portability.
|
[
{
"created": "Tue, 18 Jun 2019 14:19:52 GMT",
"version": "v1"
}
] |
2019-06-19
|
[
[
"Caubrière",
"Antoine",
""
],
[
"Tomashenko",
"Natalia",
""
],
[
"Laurent",
"Antoine",
""
],
[
"Morin",
"Emmanuel",
""
],
[
"Camelin",
"Nathalie",
""
],
[
"Estève",
"Yannick",
""
]
] |
We present an end-to-end approach to extract semantic concepts directly from the speech audio signal. To overcome the lack of data available for this spoken language understanding approach, we investigate the use of a transfer learning strategy based on the principles of curriculum learning. This approach allows us to exploit out-of-domain data that can help to prepare a fully neural architecture. Experiments are carried out on the French MEDIA and PORTMEDIA corpora and show that this end-to-end SLU approach reaches the best results ever published on this task. We compare our approach to a classical pipeline approach that uses ASR, POS tagging, lemmatizer, chunker... and other NLP tools that aim to enrich ASR outputs that feed an SLU text-to-concepts system. Last, we explore the promising capacity of our end-to-end SLU approach to address the problem of domain portability.
|
2305.06985
|
Alexander Fengler
|
Alexander Fengler, Alejandro Lancho, Krishna Narayanan, and Yury
Polyanskiy
|
On the Advantages of Asynchrony in the Unsourced MAC
|
Accepted for presentation at IEEE ISIT 2023
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we demonstrate how a lack of synchronization can in fact be
advantageous in the problem of random access. Specifically, we consider a
multiple-access problem over a frame-asynchronous 2-user binary-input adder
channel in the unsourced setup (2-UBAC). Previous work has shown that under
perfect synchronization the per-user rates achievable with linear codes over
the 2-UBAC are limited by 0.5 bit per channel use (compared to the capacity of
0.75). In this paper, we first demonstrate that an arbitrarily small (even
single-bit) shift between the users' frames enables (random) linear codes to
attain full capacity of 0.75 bit/user. Furthermore, we derive density evolution
equations for irregular LDPC codes, and prove (via concentration arguments)
that they correctly track the asymptotic bit-error rate of a BP decoder.
Optimizing the degree distributions we construct LDPC codes achieving per-user
rates of 0.73 bit per channel use.
|
[
{
"created": "Thu, 11 May 2023 17:16:57 GMT",
"version": "v1"
}
] |
2023-05-12
|
[
[
"Fengler",
"Alexander",
""
],
[
"Lancho",
"Alejandro",
""
],
[
"Narayanan",
"Krishna",
""
],
[
"Polyanskiy",
"Yury",
""
]
] |
In this work we demonstrate how a lack of synchronization can in fact be advantageous in the problem of random access. Specifically, we consider a multiple-access problem over a frame-asynchronous 2-user binary-input adder channel in the unsourced setup (2-UBAC). Previous work has shown that under perfect synchronization the per-user rates achievable with linear codes over the 2-UBAC are limited by 0.5 bit per channel use (compared to the capacity of 0.75). In this paper, we first demonstrate that an arbitrarily small (even single-bit) shift between the users' frames enables (random) linear codes to attain full capacity of 0.75 bit/user. Furthermore, we derive density evolution equations for irregular LDPC codes, and prove (via concentration arguments) that they correctly track the asymptotic bit-error rate of a BP decoder. Optimizing the degree distributions we construct LDPC codes achieving per-user rates of 0.73 bit per channel use.
|
1703.00159
|
Yong Wang
|
Yong Wang
|
A Calculus for True Concurrency
|
31 pages, 1 figures. arXiv admin note: substantial text overlap with
arXiv:1611.09035
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We design a calculus for true concurrency called CTC, including its syntax
and operational semantics. CTC has good properties modulo several kinds of
strongly truly concurrent bisimulations and weakly truly concurrent
bisimulations, such as monoid laws, static laws, new expansion law for strongly
truly concurrent bisimulations, $\tau$ laws for weakly truly concurrent
bisimulations, and full congruences for strongly and weakly truly concurrent
bisimulations, and also unique solution for recursion.
|
[
{
"created": "Wed, 1 Mar 2017 07:25:23 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Apr 2020 11:55:29 GMT",
"version": "v2"
}
] |
2020-04-24
|
[
[
"Wang",
"Yong",
""
]
] |
We design a calculus for true concurrency called CTC, including its syntax and operational semantics. CTC has good properties modulo several kinds of strongly truly concurrent bisimulations and weakly truly concurrent bisimulations, such as monoid laws, static laws, new expansion law for strongly truly concurrent bisimulations, $\tau$ laws for weakly truly concurrent bisimulations, and full congruences for strongly and weakly truly concurrent bisimulations, and also unique solution for recursion.
|
2101.02527
|
Christian Hesch
|
Ustim Khristenko, Stefan Schu{\ss}, Melanie Kr\"uger, Felix Schmidt,
Barbara Wohlmuth, Christian Hesch
|
Multidimensional coupling: A variationally consistent approach to
fiber-reinforced materials
| null | null |
10.1016/j.cma.2021.113869
| null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel mathematical model for fiber-reinforced materials is proposed. It is
based on a 1-dimensional beam model for the thin fiber structures, a flexible
and general 3-dimensional elasticity model for the matrix and an overlapping
domain decomposition approach. From a computational point of view, this is
motivated by the fact that matrix and fibers can easily be meshed
independently. Our main interest is in fiber-reinforced polymers where the
Young's moduli are quite different. Thus the modeling error from the
overlapping approach is of no significance. The coupling conditions
acknowledge both the forces and the moments of the beam model and transfer
them to the background material. A
suitable static condensation procedure is applied to remove the beam balance
equations. The condensed system then forms our starting point for a numerical
approximation in terms of isogeometric analysis. The choice of our discrete
basis functions of higher regularity is motivated by the fact that, as a result
of the static condensation, we obtain second gradient terms in fiber direction.
Eventually, a series of benchmark tests demonstrate the flexibility and
robustness of the proposed methodology. As a proof-of-concept, we show that our
new model is able to capture bending, torsion and shear dominated situations.
|
[
{
"created": "Thu, 7 Jan 2021 13:03:03 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2021 14:29:01 GMT",
"version": "v2"
},
{
"created": "Sat, 10 Apr 2021 06:42:55 GMT",
"version": "v3"
}
] |
2021-05-12
|
[
[
"Khristenko",
"Ustim",
""
],
[
"Schuß",
"Stefan",
""
],
[
"Krüger",
"Melanie",
""
],
[
"Schmidt",
"Felix",
""
],
[
"Wohlmuth",
"Barbara",
""
],
[
"Hesch",
"Christian",
""
]
] |
A novel mathematical model for fiber-reinforced materials is proposed. It is based on a 1-dimensional beam model for the thin fiber structures, a flexible and general 3-dimensional elasticity model for the matrix and an overlapping domain decomposition approach. From a computational point of view, this is motivated by the fact that matrix and fibers can easily be meshed independently. Our main interest is in fiber-reinforced polymers where the Young's moduli are quite different. Thus the modeling error from the overlapping approach is of no significance. The coupling conditions acknowledge both the forces and the moments of the beam model and transfer them to the background material. A suitable static condensation procedure is applied to remove the beam balance equations. The condensed system then forms our starting point for a numerical approximation in terms of isogeometric analysis. The choice of our discrete basis functions of higher regularity is motivated by the fact that, as a result of the static condensation, we obtain second gradient terms in fiber direction. Eventually, a series of benchmark tests demonstrate the flexibility and robustness of the proposed methodology. As a proof-of-concept, we show that our new model is able to capture bending, torsion and shear dominated situations.
|
1606.01178
|
Md. Reza
|
Md. Alimoor Reza and Jana Kosecka
|
Reinforcement Learning for Semantic Segmentation in Indoor Scenes
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Future advancements in robot autonomy and sophistication of robotics tasks
rest on robust, efficient, and task-dependent semantic understanding of the
environment. Semantic segmentation is the problem of simultaneous segmentation
and categorization of a partition of sensory data. The majority of current
approaches tackle this using multi-class segmentation and labeling in a
Conditional Random Field (CRF) framework or by generating multiple object
hypotheses and combining them sequentially. In practical settings, the subset
of semantic labels that is needed depends on the task and particular scene, and
labelling every single pixel is not always necessary. We pursue these
observations in developing a more modular and flexible approach to multi-class
parsing of RGBD data based on learning strategies for combining independent
binary object-vs-background segmentations in place of the usual monolithic
multi-label CRF approach. Parameters for the independent binary segmentation
models can be learned very efficiently, and the combination strategy---learned
using reinforcement learning---can be set independently and can vary over
different tasks and environments. Accuracy is comparable to state-of-the-art
methods on a subset of the NYU-V2 dataset of indoor scenes, while providing
additional flexibility and modularity.
|
[
{
"created": "Fri, 3 Jun 2016 16:35:58 GMT",
"version": "v1"
}
] |
2016-06-06
|
[
[
"Reza",
"Md. Alimoor",
""
],
[
"Kosecka",
"Jana",
""
]
] |
Future advancements in robot autonomy and sophistication of robotics tasks rest on robust, efficient, and task-dependent semantic understanding of the environment. Semantic segmentation is the problem of simultaneous segmentation and categorization of a partition of sensory data. The majority of current approaches tackle this using multi-class segmentation and labeling in a Conditional Random Field (CRF) framework or by generating multiple object hypotheses and combining them sequentially. In practical settings, the subset of semantic labels that is needed depends on the task and particular scene, and labelling every single pixel is not always necessary. We pursue these observations in developing a more modular and flexible approach to multi-class parsing of RGBD data based on learning strategies for combining independent binary object-vs-background segmentations in place of the usual monolithic multi-label CRF approach. Parameters for the independent binary segmentation models can be learned very efficiently, and the combination strategy---learned using reinforcement learning---can be set independently and can vary over different tasks and environments. Accuracy is comparable to state-of-the-art methods on a subset of the NYU-V2 dataset of indoor scenes, while providing additional flexibility and modularity.
|
1905.09045
|
Lorenzo Cerrone
|
Lorenzo Cerrone, Alexander Zeilmann, Fred A. Hamprecht
|
End-to-End Learned Random Walker for Seeded Image Segmentation
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present an end-to-end learned algorithm for seeded segmentation. Our
method is based on the Random Walker algorithm, where we predict the edge
weights of the underlying graph using a convolutional neural network. This can
be interpreted as learning context-dependent diffusivities for a linear
diffusion process. Besides calculating the exact gradient for optimizing these
diffusivities, we also propose simplifications that sparsely sample the
gradient and still yield competitive results. The proposed method achieves the
best results to date on a seeded version of the CREMI neuron segmentation
challenge.
|
[
{
"created": "Wed, 22 May 2019 09:56:04 GMT",
"version": "v1"
}
] |
2019-05-23
|
[
[
"Cerrone",
"Lorenzo",
""
],
[
"Zeilmann",
"Alexander",
""
],
[
"Hamprecht",
"Fred A.",
""
]
] |
We present an end-to-end learned algorithm for seeded segmentation. Our method is based on the Random Walker algorithm, where we predict the edge weights of the underlying graph using a convolutional neural network. This can be interpreted as learning context-dependent diffusivities for a linear diffusion process. Besides calculating the exact gradient for optimizing these diffusivities, we also propose simplifications that sparsely sample the gradient and still yield competitive results. The proposed method achieves the best results to date on a seeded version of the CREMI neuron segmentation challenge.
|
2009.10679
|
Manish Bhattarai
|
Manish Bhattarai, Aura Rose Jensen-Curtis, Manel Mart\'inez-Ram\'on
|
An embedded deep learning system for augmented reality in firefighting
applications
|
Accepted to ICMLA Special Session on Deep Learning
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Firefighting is a dynamic activity, in which numerous operations occur
simultaneously. Maintaining situational awareness (i.e., knowledge of current
conditions and activities at the scene) is critical to the accurate
decision-making necessary for the safe and successful navigation of a fire
environment by firefighters. Conversely, the disorientation caused by hazards
such as smoke and extreme heat can lead to injury or even fatality. This
research implements recent advancements in technology such as deep learning,
point cloud and thermal imaging, and augmented reality platforms to improve a
firefighter's situational awareness and scene navigation through improved
interpretation of that scene. We have designed and built a prototype embedded
system that can leverage data streamed from cameras built into a firefighter's
personal protective equipment (PPE) to capture thermal, RGB color, and depth
imagery and then deploy already developed deep learning models to analyze the
input data in real time. The embedded system analyzes and returns the processed
images via wireless streaming, where they can be viewed remotely and relayed
back to the firefighter using an augmented reality platform that visualizes the
results of the analyzed inputs and draws the firefighter's attention to objects
of interest, such as doors and windows otherwise invisible through smoke and
flames.
|
[
{
"created": "Tue, 22 Sep 2020 16:55:44 GMT",
"version": "v1"
}
] |
2021-07-23
|
[
[
"Bhattarai",
"Manish",
""
],
[
"Jensen-Curtis",
"Aura Rose",
""
],
[
"Martínez-Ramón",
"Manel",
""
]
] |
Firefighting is a dynamic activity, in which numerous operations occur simultaneously. Maintaining situational awareness (i.e., knowledge of current conditions and activities at the scene) is critical to the accurate decision-making necessary for the safe and successful navigation of a fire environment by firefighters. Conversely, the disorientation caused by hazards such as smoke and extreme heat can lead to injury or even fatality. This research implements recent advancements in technology such as deep learning, point cloud and thermal imaging, and augmented reality platforms to improve a firefighter's situational awareness and scene navigation through improved interpretation of that scene. We have designed and built a prototype embedded system that can leverage data streamed from cameras built into a firefighter's personal protective equipment (PPE) to capture thermal, RGB color, and depth imagery and then deploy already developed deep learning models to analyze the input data in real time. The embedded system analyzes and returns the processed images via wireless streaming, where they can be viewed remotely and relayed back to the firefighter using an augmented reality platform that visualizes the results of the analyzed inputs and draws the firefighter's attention to objects of interest, such as doors and windows otherwise invisible through smoke and flames.
|
2006.10587
|
Yisroel Mirsky Dr.
|
Yisroel Mirsky, Tomer Golomb, Yuval Elovici
|
Lightweight Collaborative Anomaly Detection for the IoT using Blockchain
|
Preprint of accepted publication, June 2020: Journal of Parallel and
Distributed Computing, Elsevier, ISSN: 0743-7315
| null | null | null |
cs.CR cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to its rapid growth and deployment, the Internet of Things (IoT) has
become a central aspect of our daily lives. Unfortunately, IoT devices tend to
have many vulnerabilities which can be exploited by an attacker. Unsupervised
techniques, such as anomaly detection, can be used to secure these devices in a
plug-and-protect manner.
However, anomaly detection models must be trained for a long time in order to
capture all benign behaviors. Furthermore, the anomaly detection model is
vulnerable to adversarial attacks since, during the training phase, all
observations are assumed to be benign. In this paper, we propose (1) a novel
approach for anomaly detection and (2) a lightweight framework that utilizes
the blockchain to ensemble an anomaly detection model in a distributed
environment.
The blockchain framework incrementally updates a trusted anomaly detection model
via self-attestation and consensus among the IoT devices. We evaluate our
method on a distributed IoT simulation platform, which consists of 48 Raspberry
Pis. The simulation demonstrates how the approach can enhance the security of
each device and the security of the network as a whole.
|
[
{
"created": "Thu, 18 Jun 2020 14:50:08 GMT",
"version": "v1"
}
] |
2020-06-19
|
[
[
"Mirsky",
"Yisroel",
""
],
[
"Golomb",
"Tomer",
""
],
[
"Elovici",
"Yuval",
""
]
] |
Due to its rapid growth and deployment, the Internet of Things (IoT) has become a central aspect of our daily lives. Unfortunately, IoT devices tend to have many vulnerabilities which can be exploited by an attacker. Unsupervised techniques, such as anomaly detection, can be used to secure these devices in a plug-and-protect manner. However, anomaly detection models must be trained for a long time in order to capture all benign behaviors. Furthermore, the anomaly detection model is vulnerable to adversarial attacks since, during the training phase, all observations are assumed to be benign. In this paper, we propose (1) a novel approach for anomaly detection and (2) a lightweight framework that utilizes the blockchain to ensemble an anomaly detection model in a distributed environment. The blockchain framework incrementally updates a trusted anomaly detection model via self-attestation and consensus among the IoT devices. We evaluate our method on a distributed IoT simulation platform, which consists of 48 Raspberry Pis. The simulation demonstrates how the approach can enhance the security of each device and the security of the network as a whole.
|
1908.08774
|
Yikun Ban
|
Yikun Ban, Yuchen Zhou, Xu Cheng, Jiangfang Yi
|
Coalesced TLB to Exploit Diverse Contiguity of Memory Mapping
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The miss rate of the TLB is crucial to the performance of address translation
for virtual memory. To reduce TLB misses, improving the translation coverage of
the TLB has been a primary approach. Many previous works focus on coalescing
multiple contiguously mapped pages of the memory mapping into a modified entry,
which works well if the assumed contiguity of the memory mapping is present.
Unfortunately, application scenarios are complicated and the produced
contiguity is diverse. To gain better translation performance, in this paper we
first introduce a complex but prevalent type of contiguity, mixed contiguity.
Then we propose a HW-SW hybrid coalesced TLB structure which works well on all
observed types of contiguity, including this type. In our evaluation, the
proposed scheme, K-bit Aligned TLB, outperforms the state-of-the-art work by
reducing TLB misses by at least 27% on average across 16 benchmarks.
|
[
{
"created": "Thu, 22 Aug 2019 11:34:43 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Dec 2019 22:21:59 GMT",
"version": "v2"
}
] |
2019-12-04
|
[
[
"Ban",
"Yikun",
""
],
[
"Zhou",
"Yuchen",
""
],
[
"Cheng",
"Xu",
""
],
[
"Yi",
"Jiangfang",
""
]
] |
The miss rate of the TLB is crucial to the performance of address translation for virtual memory. To reduce TLB misses, improving the translation coverage of the TLB has been a primary approach. Many previous works focus on coalescing multiple contiguously mapped pages of the memory mapping into a modified entry, which works well if the assumed contiguity of the memory mapping is present. Unfortunately, application scenarios are complicated and the produced contiguity is diverse. To gain better translation performance, in this paper we first introduce a complex but prevalent type of contiguity, mixed contiguity. Then we propose a HW-SW hybrid coalesced TLB structure which works well on all observed types of contiguity, including this type. In our evaluation, the proposed scheme, K-bit Aligned TLB, outperforms the state-of-the-art work by reducing TLB misses by at least 27% on average across 16 benchmarks.
|
2311.08383
|
Priyanka Kaswan
|
Priyanka Kaswan and Sennur Ulukus
|
Choosing Outdated Information to Achieve Reliability in Age-Based
Gossiping
| null | null | null | null |
cs.IT cs.NI eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a system model with two sources, a reliable source and an
unreliable source, who are responsible for disseminating updates regarding a
process to an age-based gossip network of $n$ nodes. Nodes wish to have fresh
information, however, they have preference for packets that originated at the
reliable source and are willing to sacrifice their version age of information
by up to $G$ versions to switch from an unreliable packet to a reliable packet.
We study how this protocol impacts the prevalence of unreliable packets at
nodes in the network and their version age. Using a stochastic hybrid system
(SHS) framework, we formulate analytical equations to characterize two
quantities: expected fraction of nodes with unreliable packets and expected
version age of information at network nodes. We show that as $G$ increases,
fewer nodes have unreliable packets; however, their version age increases as
well, thereby inducing a freshness-reliability trade-off in the network. We
present numerical results to support our findings.
|
[
{
"created": "Tue, 14 Nov 2023 18:45:29 GMT",
"version": "v1"
}
] |
2023-11-15
|
[
[
"Kaswan",
"Priyanka",
""
],
[
"Ulukus",
"Sennur",
""
]
] |
We consider a system model with two sources, a reliable source and an unreliable source, who are responsible for disseminating updates regarding a process to an age-based gossip network of $n$ nodes. Nodes wish to have fresh information; however, they have a preference for packets that originated at the reliable source and are willing to sacrifice their version age of information by up to $G$ versions to switch from an unreliable packet to a reliable packet. We study how this protocol impacts the prevalence of unreliable packets at nodes in the network and their version age. Using a stochastic hybrid system (SHS) framework, we formulate analytical equations to characterize two quantities: expected fraction of nodes with unreliable packets and expected version age of information at network nodes. We show that as $G$ increases, fewer nodes have unreliable packets; however, their version age increases as well, thereby inducing a freshness-reliability trade-off in the network. We present numerical results to support our findings.
|
2110.13052
|
Noah Golowich
|
Noah Golowich, Ankur Moitra
|
Can Q-Learning be Improved with Advice?
| null | null | null | null |
cs.LG cs.AI cs.DS math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite rapid progress in theoretical reinforcement learning (RL) over the
last few years, most of the known guarantees are worst-case in nature, failing
to take advantage of structure that may be known a priori about a given RL
problem at hand. In this paper we address the question of whether worst-case
lower bounds for regret in online learning of Markov decision processes (MDPs)
can be circumvented when information about the MDP, in the form of predictions
about its optimal $Q$-value function, is given to the algorithm. We show that
when the predictions about the optimal $Q$-value function satisfy a reasonably
weak condition we call distillation, then we can improve regret bounds by
replacing the set of state-action pairs with the set of state-action pairs on
which the predictions are grossly inaccurate. This improvement holds for both
uniform regret bounds and gap-based ones. Further, we are able to achieve this
property with an algorithm that achieves sublinear regret when given arbitrary
predictions (i.e., even those which are not a distillation). Our work extends a
recent line of work on algorithms with predictions, which has typically focused
on simple online problems such as caching and scheduling, to the more complex
and general problem of reinforcement learning.
|
[
{
"created": "Mon, 25 Oct 2021 15:44:20 GMT",
"version": "v1"
}
] |
2021-10-26
|
[
[
"Golowich",
"Noah",
""
],
[
"Moitra",
"Ankur",
""
]
] |
Despite rapid progress in theoretical reinforcement learning (RL) over the last few years, most of the known guarantees are worst-case in nature, failing to take advantage of structure that may be known a priori about a given RL problem at hand. In this paper we address the question of whether worst-case lower bounds for regret in online learning of Markov decision processes (MDPs) can be circumvented when information about the MDP, in the form of predictions about its optimal $Q$-value function, is given to the algorithm. We show that when the predictions about the optimal $Q$-value function satisfy a reasonably weak condition we call distillation, then we can improve regret bounds by replacing the set of state-action pairs with the set of state-action pairs on which the predictions are grossly inaccurate. This improvement holds for both uniform regret bounds and gap-based ones. Further, we are able to achieve this property with an algorithm that achieves sublinear regret when given arbitrary predictions (i.e., even those which are not a distillation). Our work extends a recent line of work on algorithms with predictions, which has typically focused on simple online problems such as caching and scheduling, to the more complex and general problem of reinforcement learning.
|
1709.00112
|
Swanand Kadhe
|
Swanand Kadhe, Brenden Garcia, Anoosheh Heidarzadeh, Salim El
Rouayheb, Alex Sprintson
|
Private Information Retrieval with Side Information
|
Shorter version of the paper is accepted in Allerton Conference 2017
| null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of Private Information Retrieval (PIR) in the presence
of prior side information. The problem setup includes a database of $K$
independent messages possibly replicated on several servers, and a user that
needs to retrieve one of these messages. In addition, the user has some prior
side information in the form of a subset of $M$ messages, not containing the
desired message and unknown to the servers. This problem is motivated by
practical settings in which the user can obtain side information
opportunistically from other users or has previously downloaded some messages
using classical PIR schemes. The objective of the user is to retrieve the
required message without revealing its identity while minimizing the amount of
data downloaded from the servers.
We focus on achieving information-theoretic privacy in two scenarios: (i) the
user wants to protect jointly its demand and side information; (ii) the user
wants to protect only the information about its demand, but not the side
information. To highlight the role of side information, we focus first on the
case of a single server (single database). In the first scenario, we prove that
the minimum download cost is $K-M$ messages, and in the second scenario it is
$\lceil \frac{K}{M+1}\rceil$ messages, which should be compared to $K$
messages, the minimum download cost in the case of no side information. Then,
we extend some of our results to the case of the database replicated on
multiple servers. Our proof techniques relate PIR with side information to the
index coding problem. We leverage this connection to prove converse results, as
well as to design achievability schemes.
|
[
{
"created": "Fri, 1 Sep 2017 00:04:11 GMT",
"version": "v1"
}
] |
2017-09-04
|
[
[
"Kadhe",
"Swanand",
""
],
[
"Garcia",
"Brenden",
""
],
[
"Heidarzadeh",
"Anoosheh",
""
],
[
"Rouayheb",
"Salim El",
""
],
[
"Sprintson",
"Alex",
""
]
] |
We study the problem of Private Information Retrieval (PIR) in the presence of prior side information. The problem setup includes a database of $K$ independent messages possibly replicated on several servers, and a user that needs to retrieve one of these messages. In addition, the user has some prior side information in the form of a subset of $M$ messages, not containing the desired message and unknown to the servers. This problem is motivated by practical settings in which the user can obtain side information opportunistically from other users or has previously downloaded some messages using classical PIR schemes. The objective of the user is to retrieve the required message without revealing its identity while minimizing the amount of data downloaded from the servers. We focus on achieving information-theoretic privacy in two scenarios: (i) the user wants to protect jointly its demand and side information; (ii) the user wants to protect only the information about its demand, but not the side information. To highlight the role of side information, we focus first on the case of a single server (single database). In the first scenario, we prove that the minimum download cost is $K-M$ messages, and in the second scenario it is $\lceil \frac{K}{M+1}\rceil$ messages, which should be compared to $K$ messages, the minimum download cost in the case of no side information. Then, we extend some of our results to the case of the database replicated on multiple servers. Our proof techniques relate PIR with side information to the index coding problem. We leverage this connection to prove converse results, as well as to design achievability schemes.
|
1910.11791
|
Linchao Bao
|
Yajing Chen, Fanzi Wu, Zeyu Wang, Yibing Song, Yonggen Ling, Linchao
Bao
|
Self-supervised Learning of Detailed 3D Face Reconstruction
|
Accepted by IEEE Transactions on Image Processing (TIP)
| null |
10.1109/TIP.2020.3017347
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present an end-to-end learning framework for detailed 3D
face reconstruction from a single image. Our approach uses a 3DMM-based coarse
model and a displacement map in UV-space to represent a 3D face. Unlike
previous work addressing the problem, our learning framework does not require
supervision of surrogate ground-truth 3D models computed with traditional
approaches. Instead, we utilize the input image itself as supervision during
learning. In the first stage, we combine a photometric loss and a facial
perceptual loss between the input face and the rendered face, to regress a
3DMM-based coarse model. In the second stage, both the input image and the
regressed texture of the coarse model are unwrapped into UV-space, and then
sent through an image-to-image translation network to predict a displacement
map in UV-space. The displacement map and the coarse model are used to render a
final detailed face, which again can be compared with the original input image
to serve as a photometric loss for the second stage. The advantage of learning
the displacement map in UV-space is that face alignment can be done explicitly
during the unwrapping; thus facial details are easier to learn from a large
amount of data. Extensive experiments demonstrate the superiority of the
proposed method over previous work.
|
[
{
"created": "Fri, 25 Oct 2019 15:16:20 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Sep 2020 03:58:23 GMT",
"version": "v2"
}
] |
2020-09-03
|
[
[
"Chen",
"Yajing",
""
],
[
"Wu",
"Fanzi",
""
],
[
"Wang",
"Zeyu",
""
],
[
"Song",
"Yibing",
""
],
[
"Ling",
"Yonggen",
""
],
[
"Bao",
"Linchao",
""
]
] |
In this paper, we present an end-to-end learning framework for detailed 3D face reconstruction from a single image. Our approach uses a 3DMM-based coarse model and a displacement map in UV-space to represent a 3D face. Unlike previous work addressing the problem, our learning framework does not require supervision of surrogate ground-truth 3D models computed with traditional approaches. Instead, we utilize the input image itself as supervision during learning. In the first stage, we combine a photometric loss and a facial perceptual loss between the input face and the rendered face, to regress a 3DMM-based coarse model. In the second stage, both the input image and the regressed texture of the coarse model are unwrapped into UV-space, and then sent through an image-to-image translation network to predict a displacement map in UV-space. The displacement map and the coarse model are used to render a final detailed face, which again can be compared with the original input image to serve as a photometric loss for the second stage. The advantage of learning the displacement map in UV-space is that face alignment can be done explicitly during the unwrapping; thus facial details are easier to learn from a large amount of data. Extensive experiments demonstrate the superiority of the proposed method over previous work.
|
1804.06996
|
Gaurav Bharaj
|
Gaurav Bharaj, Danny Kaufman, Etienne Vouga, Hanspeter Pfister
|
Metamorphs: Bistable Planar Structures
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extreme deformation can drastically morph a structure from one structural
form into another. Programming such deformation properties into the structure
is often challenging and in many cases an impossible task. The morphed forms do
not hold and usually relapse to the original form, where the structure is in
its lowest energy state. For example, a stick, when bent, resists its bent form
and tends to go back to its initial straight form, where it holds the least
amount of potential energy.
In this project, we present a computational design method which can create
fabricable planar structures that can morph into two different bistable forms.
Once the user provides the initial desired forms, the method automatically
creates support structures (internal springs) such that the structure can not
only morph but also hold the respective forms under external force
application. We achieve this through an iterative nonlinear optimization
strategy for shaping the potential energy of the structure in the two forms
simultaneously. Our approach guarantees first and second-order stability with
respect to the potential energy of the bistable structure.
|
[
{
"created": "Thu, 19 Apr 2018 05:15:03 GMT",
"version": "v1"
}
] |
2018-04-20
|
[
[
"Bharaj",
"Gaurav",
""
],
[
"Kaufman",
"Danny",
""
],
[
"Vouga",
"Etienne",
""
],
[
"Pfister",
"Hanspeter",
""
]
] |
Extreme deformation can drastically morph a structure from one structural form into another. Programming such deformation properties into the structure is often challenging and in many cases an impossible task. The morphed forms do not hold and usually relapse to the original form, where the structure is in its lowest energy state. For example, a stick, when bent, resists its bent form and tends to go back to its initial straight form, where it holds the least amount of potential energy. In this project, we present a computational design method which can create fabricable planar structures that can morph into two different bistable forms. Once the user provides the initial desired forms, the method automatically creates support structures (internal springs) such that the structure can not only morph but also hold the respective forms under external force application. We achieve this through an iterative nonlinear optimization strategy for shaping the potential energy of the structure in the two forms simultaneously. Our approach guarantees first and second-order stability with respect to the potential energy of the bistable structure.
|
1510.01891
|
Adam Kurpisz
|
Adam Kurpisz, Samuli Lepp\"anen, Monaldo Mastrolilli
|
On the Hardest Problem Formulations for the 0/1 Lasserre Hierarchy
| null | null | null | null |
cs.CC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Lasserre/Sum-of-Squares (SoS) hierarchy is a systematic procedure for
constructing a sequence of increasingly tight semidefinite relaxations. It is
known that the hierarchy converges to the 0/1 polytope in n levels and captures
the convex relaxations used in the best available approximation algorithms for
a wide variety of optimization problems.
In this paper we characterize the set of 0/1 integer linear problems and
unconstrained 0/1 polynomial optimization problems that can still have an
integrality gap at level n-1. These problems are the hardest for the Lasserre
hierarchy in this sense.
|
[
{
"created": "Wed, 7 Oct 2015 10:57:45 GMT",
"version": "v1"
}
] |
2015-10-08
|
[
[
"Kurpisz",
"Adam",
""
],
[
"Leppänen",
"Samuli",
""
],
[
"Mastrolilli",
"Monaldo",
""
]
] |
The Lasserre/Sum-of-Squares (SoS) hierarchy is a systematic procedure for constructing a sequence of increasingly tight semidefinite relaxations. It is known that the hierarchy converges to the 0/1 polytope in n levels and captures the convex relaxations used in the best available approximation algorithms for a wide variety of optimization problems. In this paper we characterize the set of 0/1 integer linear problems and unconstrained 0/1 polynomial optimization problems that can still have an integrality gap at level n-1. These problems are the hardest for the Lasserre hierarchy in this sense.
|
2203.06870
|
Cl\'ement Canonne
|
Jayadev Acharya and Cl\'ement L. Canonne and Ziteng Sun and Himanshu
Tyagi
|
The Role of Interactivity in Structured Estimation
| null | null | null | null |
cs.DS cs.DM cs.IT cs.LG math.IT math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study high-dimensional sparse estimation under three natural constraints:
communication constraints, local privacy constraints, and linear measurements
(compressive sensing). Without sparsity assumptions, it has been established
that interactivity cannot improve the minimax rates of estimation under these
information constraints. The question of whether interactivity helps with
natural inference tasks has been a topic of active research. We settle this
question in the affirmative for the prototypical problems of high-dimensional
sparse mean estimation and compressive sensing, by demonstrating a gap between
interactive and noninteractive protocols. We further establish that the gap
increases when we have more structured sparsity: for block sparsity this gap
can be as large as polynomial in the dimensionality. Thus, the more structured
the sparsity is, the greater is the advantage of interaction. Proving the lower
bounds requires a careful breaking of a sum of correlated random variables into
independent components using Baranyai's theorem on decomposition of
hypergraphs, which might be of independent interest.
|
[
{
"created": "Mon, 14 Mar 2022 05:54:42 GMT",
"version": "v1"
}
] |
2022-03-15
|
[
[
"Acharya",
"Jayadev",
""
],
[
"Canonne",
"Clément L.",
""
],
[
"Sun",
"Ziteng",
""
],
[
"Tyagi",
"Himanshu",
""
]
] |
We study high-dimensional sparse estimation under three natural constraints: communication constraints, local privacy constraints, and linear measurements (compressive sensing). Without sparsity assumptions, it has been established that interactivity cannot improve the minimax rates of estimation under these information constraints. The question of whether interactivity helps with natural inference tasks has been a topic of active research. We settle this question in the affirmative for the prototypical problems of high-dimensional sparse mean estimation and compressive sensing, by demonstrating a gap between interactive and noninteractive protocols. We further establish that the gap increases when we have more structured sparsity: for block sparsity this gap can be as large as polynomial in the dimensionality. Thus, the more structured the sparsity is, the greater is the advantage of interaction. Proving the lower bounds requires a careful breaking of a sum of correlated random variables into independent components using Baranyai's theorem on decomposition of hypergraphs, which might be of independent interest.
|
1903.00553
|
Binghui Wang
|
Binghui Wang, Neil Zhenqiang Gong
|
Attacking Graph-based Classification via Manipulating the Graph
Structure
|
To appear in The 26th ACM Conference on Computer and Communications
Security, Nov 2019
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph-based classification methods are widely used for security and privacy
analytics. Roughly speaking, graph-based classification methods include
collective classification and graph neural networks. Evading a graph-based
classification method enables an attacker to evade detection in security
analytics and can be used as a privacy defense against inference attacks.
Existing adversarial machine learning studies mainly focused on machine
learning for non-graph data. Only a few recent studies have touched on
adversarial graph-based classification methods. However, they focused on graph neural
network methods, leaving adversarial collective classification largely
unexplored. We aim to bridge this gap in this work. We first propose a threat
model to characterize the attack surface of a collective classification method.
Specifically, we characterize an attacker's background knowledge along three
dimensions: parameters of the method, training dataset, and the complete graph;
an attacker's goal is to evade detection via manipulating the graph structure.
We formulate our attack as a graph-based optimization problem, solving which
produces the edges that an attacker needs to manipulate to achieve its attack
goal. Moreover, we propose several approximation techniques to solve the
optimization problem. We evaluate our attacks and compare them with a recent
attack designed for graph neural networks. Results show that our attacks 1) can
effectively evade graph-based classification methods; 2) do not require access
to the true parameters, true training dataset, and/or complete graph; and 3)
outperform the existing attack for evading collective classification methods
and some graph neural network methods. We also apply our attacks to evade Sybil
detection using a large-scale Twitter dataset and apply our attacks as a
defense against attribute inference attacks using a large-scale Google+
dataset.
|
[
{
"created": "Fri, 1 Mar 2019 21:59:17 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2019 02:00:12 GMT",
"version": "v2"
}
] |
2019-08-14
|
[
[
"Wang",
"Binghui",
""
],
[
"Gong",
"Neil Zhenqiang",
""
]
] |
Graph-based classification methods are widely used for security and privacy analytics. Roughly speaking, graph-based classification methods include collective classification and graph neural networks. Evading a graph-based classification method enables an attacker to evade detection in security analytics and can be used as a privacy defense against inference attacks. Existing adversarial machine learning studies mainly focused on machine learning for non-graph data. Only a few recent studies have touched on adversarial graph-based classification methods. However, they focused on graph neural network methods, leaving adversarial collective classification largely unexplored. We aim to bridge this gap in this work. We first propose a threat model to characterize the attack surface of a collective classification method. Specifically, we characterize an attacker's background knowledge along three dimensions: parameters of the method, training dataset, and the complete graph; an attacker's goal is to evade detection via manipulating the graph structure. We formulate our attack as a graph-based optimization problem, solving which produces the edges that an attacker needs to manipulate to achieve its attack goal. Moreover, we propose several approximation techniques to solve the optimization problem. We evaluate our attacks and compare them with a recent attack designed for graph neural networks. Results show that our attacks 1) can effectively evade graph-based classification methods; 2) do not require access to the true parameters, true training dataset, and/or complete graph; and 3) outperform the existing attack for evading collective classification methods and some graph neural network methods. We also apply our attacks to evade Sybil detection using a large-scale Twitter dataset and apply our attacks as a defense against attribute inference attacks using a large-scale Google+ dataset.
|
2308.03312
|
Kexin Pei
|
Kexin Pei, Weichen Li, Qirui Jin, Shuyang Liu, Scott Geng, Lorenzo
Cavallaro, Junfeng Yang, Suman Jana
|
Exploiting Code Symmetries for Learning Program Semantics
| null | null | null | null |
cs.LG cs.CR cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper tackles the challenge of teaching code semantics to Large Language
Models (LLMs) for program analysis by incorporating code symmetries into the
model architecture. We introduce a group-theoretic framework that defines code
symmetries as semantics-preserving transformations, where forming a code
symmetry group enables precise and efficient reasoning of code semantics. Our
solution, SymC, develops a novel variant of self-attention that is provably
equivariant to code symmetries from the permutation group defined over the
program dependence graph. SymC obtains superior performance on five program
analysis tasks, outperforming state-of-the-art code models without any
pre-training. Our results suggest that code LLMs that encode the code
structural prior via the code symmetry group generalize better and faster.
|
[
{
"created": "Mon, 7 Aug 2023 05:40:58 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Aug 2023 16:08:51 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Aug 2023 04:53:52 GMT",
"version": "v3"
},
{
"created": "Tue, 29 Aug 2023 01:44:39 GMT",
"version": "v4"
},
{
"created": "Thu, 31 Aug 2023 02:29:36 GMT",
"version": "v5"
},
{
"created": "Tue, 27 Feb 2024 21:18:17 GMT",
"version": "v6"
},
{
"created": "Thu, 29 Feb 2024 05:16:24 GMT",
"version": "v7"
},
{
"created": "Thu, 6 Jun 2024 16:35:20 GMT",
"version": "v8"
}
] |
2024-06-07
|
[
[
"Pei",
"Kexin",
""
],
[
"Li",
"Weichen",
""
],
[
"Jin",
"Qirui",
""
],
[
"Liu",
"Shuyang",
""
],
[
"Geng",
"Scott",
""
],
[
"Cavallaro",
"Lorenzo",
""
],
[
"Yang",
"Junfeng",
""
],
[
"Jana",
"Suman",
""
]
] |
This paper tackles the challenge of teaching code semantics to Large Language Models (LLMs) for program analysis by incorporating code symmetries into the model architecture. We introduce a group-theoretic framework that defines code symmetries as semantics-preserving transformations, where forming a code symmetry group enables precise and efficient reasoning of code semantics. Our solution, SymC, develops a novel variant of self-attention that is provably equivariant to code symmetries from the permutation group defined over the program dependence graph. SymC obtains superior performance on five program analysis tasks, outperforming state-of-the-art code models without any pre-training. Our results suggest that code LLMs that encode the code structural prior via the code symmetry group generalize better and faster.
|
2202.05329
|
Simon Lars\'en
|
Simon Lars\'en (1), Jean-R\'emy Falleri (2), Benoit Baudry (1), Martin
Monperrus (1) ((1) KTH Royal Institute of Technology, (2) Univ. Bordeaux,
Bordeaux INP, CNRS, LaBRI, IUF)
|
Spork: Structured Merge for Java with Formatting Preservation
|
21 pages, 18 figures, 11 tables, accepted for publication in IEEE
Transactions on Software Engineering
|
IEEE Transactions on Software Engineering, 2022
|
10.1109/TSE.2022.3143766
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The highly parallel workflows of modern software development have made
merging of source code a common activity for developers. The state of the
practice is based on line-based merge, which is ubiquitously used with "git
merge". Line-based merge is however a generalized technique for any text that
cannot leverage the structured nature of source code, making merge conflicts a
common occurrence. As a remedy, research has proposed structured merge tools,
which typically operate on abstract syntax trees instead of raw text.
Structured merging greatly reduces the prevalence of merge conflicts but
suffers from important limitations, the main ones being a tendency to alter the
formatting of the merged code and being prone to excessive running times. In
this paper, we present SPORK, a novel structured merge tool for JAVA. SPORK is
unique as it preserves formatting to a significantly greater degree than
comparable state-of-the-art tools. SPORK is also overall faster than the state
of the art, in particular significantly reducing worst-case running times in
practice. We demonstrate these properties by replaying 1740 real-world file
merges collected from 119 open-source projects, and further demonstrate several
key differences between SPORK and the state of the art with in-depth case
studies.
|
[
{
"created": "Thu, 10 Feb 2022 21:15:49 GMT",
"version": "v1"
}
] |
2022-02-21
|
[
[
"Larsén",
"Simon",
""
],
[
"Falleri",
"Jean-Rémy",
""
],
[
"Baudry",
"Benoit",
""
],
[
"Monperrus",
"Martin",
""
]
] |
The highly parallel workflows of modern software development have made merging of source code a common activity for developers. The state of the practice is based on line-based merge, which is ubiquitously used with "git merge". Line-based merge is however a generalized technique for any text that cannot leverage the structured nature of source code, making merge conflicts a common occurrence. As a remedy, research has proposed structured merge tools, which typically operate on abstract syntax trees instead of raw text. Structured merging greatly reduces the prevalence of merge conflicts but suffers from important limitations, the main ones being a tendency to alter the formatting of the merged code and being prone to excessive running times. In this paper, we present SPORK, a novel structured merge tool for JAVA. SPORK is unique as it preserves formatting to a significantly greater degree than comparable state-of-the-art tools. SPORK is also overall faster than the state of the art, in particular significantly reducing worst-case running times in practice. We demonstrate these properties by replaying 1740 real-world file merges collected from 119 open-source projects, and further demonstrate several key differences between SPORK and the state of the art with in-depth case studies.
|
1501.04985
|
Konstantinos Georgiou
|
Jurek Czyzowicz, Konstantinos Georgiou, Evangelos Kranakis, Lata
Narayanan, Jarda Opatrny, Birgit Vogtenhuber
|
Evacuating Robots from a Disk Using Face-to-Face Communication
|
22 pages, 8 figures. An extended abstract of this work was accepted
for publication in the LNCS proceedings of the 9th International Conference
on Algorithms and Complexity (CIAC15)
|
Discrete Mathematics & Theoretical Computer Science, vol. 22 no.
4, Distributed Computing and Networking (August 27, 2020) dmtcs:6198
|
10.23638/DMTCS-22-4-4
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assume that two robots are located at the centre of a unit disk. Their goal
is to evacuate from the disk through an exit at an unknown location on the
boundary of the disk. At any time the robots can move anywhere they choose on
the disk, independently of each other, with maximum speed $1$. The robots can
cooperate by exchanging information whenever they meet. We study algorithms for
the two robots to minimize the evacuation time: the time when both robots reach
the exit.
In [CGGKMP14] the authors gave an algorithm defining trajectories for the two
robots yielding evacuation time at most $5.740$ and also proved that any
algorithm has evacuation time at least $3+ \frac{\pi}{4} + \sqrt{2} \approx
5.199$. We improve both the upper and lower bound on the evacuation time of a
unit disk. Namely, we present a new non-trivial algorithm whose evacuation time
is at most $5.628$ and show that any algorithm has evacuation time at least $3+
\frac{\pi}{6} + \sqrt{3} \approx 5.255$. To achieve the upper bound, we
designed an algorithm which proposes a forced meeting between the two robots,
even if the exit has not been found by either of them. We also show that such a
strategy is provably optimal for a related problem of searching for an exit
placed at the vertices of a regular hexagon.
|
[
{
"created": "Tue, 20 Jan 2015 21:36:08 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Mar 2020 02:07:34 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Jun 2020 02:12:31 GMT",
"version": "v3"
},
{
"created": "Tue, 21 Jul 2020 04:08:18 GMT",
"version": "v4"
},
{
"created": "Mon, 24 Aug 2020 04:13:48 GMT",
"version": "v5"
}
] |
2023-06-22
|
[
[
"Czyzowicz",
"Jurek",
""
],
[
"Georgiou",
"Konstantinos",
""
],
[
"Kranakis",
"Evangelos",
""
],
[
"Narayanan",
"Lata",
""
],
[
"Opatrny",
"Jarda",
""
],
[
"Vogtenhuber",
"Birgit",
""
]
] |
Assume that two robots are located at the centre of a unit disk. Their goal is to evacuate from the disk through an exit at an unknown location on the boundary of the disk. At any time the robots can move anywhere they choose on the disk, independently of each other, with maximum speed $1$. The robots can cooperate by exchanging information whenever they meet. We study algorithms for the two robots to minimize the evacuation time: the time when both robots reach the exit. In [CGGKMP14] the authors gave an algorithm defining trajectories for the two robots yielding evacuation time at most $5.740$ and also proved that any algorithm has evacuation time at least $3+ \frac{\pi}{4} + \sqrt{2} \approx 5.199$. We improve both the upper and lower bound on the evacuation time of a unit disk. Namely, we present a new non-trivial algorithm whose evacuation time is at most $5.628$ and show that any algorithm has evacuation time at least $3+ \frac{\pi}{6} + \sqrt{3} \approx 5.255$. To achieve the upper bound, we designed an algorithm which proposes a forced meeting between the two robots, even if the exit has not been found by either of them. We also show that such a strategy is provably optimal for a related problem of searching for an exit placed at the vertices of a regular hexagon.
|
2310.11450
|
Thomas Decker
|
Thomas Decker, Michael Lebacher and Volker Tresp
|
Explaining Deep Neural Networks for Bearing Fault Detection with
Vibration Concepts
|
2023 IEEE 21st International Conference on Industrial Informatics
(INDIN)
| null |
10.1109/INDIN51400.2023.10218170
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concept-based explanation methods, such as Concept Activation Vectors, are
potent means to quantify how abstract or high-level characteristics of input
data influence the predictions of complex deep neural networks. However,
applying them to industrial prediction problems is challenging as it is not
immediately clear how to define and access appropriate concepts for individual
use cases and specific data types. In this work, we investigate how to leverage
established concept-based explanation techniques in the context of bearing
fault detection with deep neural networks trained on vibration signals. Since
bearings are prevalent in almost all rotating equipment, ensuring the
reliability of intransparent fault detection models is crucial to prevent
costly repairs and downtimes of industrial machinery. Our evaluations
demonstrate that explaining opaque models in terms of vibration concepts
enables human-comprehensible and intuitive insights about their inner workings,
but the underlying assumptions need to be carefully validated first.
|
[
{
"created": "Tue, 17 Oct 2023 17:58:19 GMT",
"version": "v1"
}
] |
2023-10-18
|
[
[
"Decker",
"Thomas",
""
],
[
"Lebacher",
"Michael",
""
],
[
"Tresp",
"Volker",
""
]
] |
Concept-based explanation methods, such as Concept Activation Vectors, are potent means to quantify how abstract or high-level characteristics of input data influence the predictions of complex deep neural networks. However, applying them to industrial prediction problems is challenging as it is not immediately clear how to define and access appropriate concepts for individual use cases and specific data types. In this work, we investigate how to leverage established concept-based explanation techniques in the context of bearing fault detection with deep neural networks trained on vibration signals. Since bearings are prevalent in almost all rotating equipment, ensuring the reliability of intransparent fault detection models is crucial to prevent costly repairs and downtimes of industrial machinery. Our evaluations demonstrate that explaining opaque models in terms of vibration concepts enables human-comprehensible and intuitive insights about their inner workings, but the underlying assumptions need to be carefully validated first.
|
1403.3286
|
Sadegh Esmaeil Zadeh Soudjani
|
S. Esmaeil Zadeh Soudjani, C. Gevaerts, A. Abate
|
FAUST$^2$: Formal Abstractions of Uncountable-STate STochastic processes
|
This paper is submitted to the 26th International Conference on
Computer Aided Verification (CAV 2014)
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
FAUST$^2$ is a software tool that generates formal abstractions of (possibly
non-deterministic) discrete-time Markov processes (dtMP) defined over
uncountable (continuous) state spaces. A dtMP model is specified in MATLAB and
abstracted as a finite-state Markov chain or a Markov decision process. The
abstraction procedure runs in MATLAB and employs parallel computations and fast
manipulations based on vector calculus. The abstract model is formally put in
relationship with the concrete dtMP via a user-defined maximum threshold on the
approximation error introduced by the abstraction procedure. FAUST$^2$ allows
exporting the abstract model to well-known probabilistic model checkers, such
as PRISM or MRMC. Alternatively, it can handle internally the computation of
PCTL properties (e.g. safety or reach-avoid) over the abstract model, and
refine the outcomes over the concrete dtMP via a quantified error that depends
on the abstraction procedure and the given formula. The toolbox is available at
http://sourceforge.net/projects/faust2/
|
[
{
"created": "Thu, 13 Mar 2014 14:53:46 GMT",
"version": "v1"
}
] |
2014-03-14
|
[
[
"Soudjani",
"S. Esmaeil Zadeh",
""
],
[
"Gevaerts",
"C.",
""
],
[
"Abate",
"A.",
""
]
] |
FAUST$^2$ is a software tool that generates formal abstractions of (possibly non-deterministic) discrete-time Markov processes (dtMP) defined over uncountable (continuous) state spaces. A dtMP model is specified in MATLAB and abstracted as a finite-state Markov chain or a Markov decision process. The abstraction procedure runs in MATLAB and employs parallel computations and fast manipulations based on vector calculus. The abstract model is formally put in relationship with the concrete dtMP via a user-defined maximum threshold on the approximation error introduced by the abstraction procedure. FAUST$^2$ allows exporting the abstract model to well-known probabilistic model checkers, such as PRISM or MRMC. Alternatively, it can handle internally the computation of PCTL properties (e.g. safety or reach-avoid) over the abstract model, and refine the outcomes over the concrete dtMP via a quantified error that depends on the abstraction procedure and the given formula. The toolbox is available at http://sourceforge.net/projects/faust2/
|
2210.16046
|
Masakazu Yoshimura
|
Masakazu Yoshimura, Junji Otsuka, Atsushi Irie, Takeshi Ohashi
|
Rawgment: Noise-Accounted RAW Augmentation Enables Recognition in a Wide
Variety of Environments
|
Accepted to CVPR2023
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Image recognition models that work in challenging environments (e.g.,
extremely dark, blurry, or high dynamic range conditions) are essential.
However, creating training datasets for such environments is expensive and hard
due to the difficulties of data collection and annotation. It would be
desirable to obtain a robust model without the need for hard-to-obtain
datasets. One
simple approach is to apply data augmentation such as color jitter and blur to
standard RGB (sRGB) images in simple scenes. Unfortunately, this approach
struggles to yield realistic images in terms of pixel intensity and noise
distribution due to not considering the non-linearity of Image Signal
Processors (ISPs) and noise characteristics of image sensors. Instead, we
propose a noise-accounted RAW image augmentation method. In essence, color
jitter and blur augmentation are applied to a RAW image before applying
non-linear ISP, resulting in realistic intensity. Furthermore, we introduce a
noise amount alignment method that calibrates the domain gap in the noise
property caused by the augmentation. We show that our proposed noise-accounted
RAW augmentation method doubles the image recognition accuracy in challenging
environments only with simple training data.
|
[
{
"created": "Fri, 28 Oct 2022 10:33:45 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Mar 2023 06:17:13 GMT",
"version": "v2"
}
] |
2023-03-28
|
[
[
"Yoshimura",
"Masakazu",
""
],
[
"Otsuka",
"Junji",
""
],
[
"Irie",
"Atsushi",
""
],
[
"Ohashi",
"Takeshi",
""
]
] |
Image recognition models that work in challenging environments (e.g., extremely dark, blurry, or high dynamic range conditions) are essential. However, creating training datasets for such environments is expensive and hard due to the difficulties of data collection and annotation. It would be desirable to obtain a robust model without the need for hard-to-obtain datasets. One simple approach is to apply data augmentation such as color jitter and blur to standard RGB (sRGB) images in simple scenes. Unfortunately, this approach struggles to yield realistic images in terms of pixel intensity and noise distribution due to not considering the non-linearity of Image Signal Processors (ISPs) and noise characteristics of image sensors. Instead, we propose a noise-accounted RAW image augmentation method. In essence, color jitter and blur augmentation are applied to a RAW image before applying non-linear ISP, resulting in realistic intensity. Furthermore, we introduce a noise amount alignment method that calibrates the domain gap in the noise property caused by the augmentation. We show that our proposed noise-accounted RAW augmentation method doubles the image recognition accuracy in challenging environments only with simple training data.
|
1305.2755
|
Issam Sahmoudi
|
Issam Sahmoudi and Abdelmonaime Lachkar
|
Clustering Web Search Results For Effective Arabic Language Browsing
| null |
International Journal on Natural Language Computing (IJNLC) Vol.
2, No.2, April 2013
|
10.5121/ijnlc.2013.2202
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The process of browsing search results is one of the major problems with
traditional Web search engines for English, European, and other languages in
general, and for the Arabic language in particular. This process is time
consuming and the browsing style is unattractive. Organizing Web search
results into clusters facilitates users' quick browsing through search
results. Traditional clustering techniques (data-centric clustering
algorithms) are inadequate since they do not generate clusters with highly
readable names or cluster labels. To solve this problem, description-centric
algorithms such as the Suffix Tree Clustering (STC) algorithm have been
introduced and used successfully and extensively, with different adapted
versions, for English, European, and Chinese languages. However, at the time
of writing this paper, to our knowledge, the STC algorithm had never been
applied to clustering Arabic Web snippet search results. In this paper, we
first study how STC can be applied to the Arabic language. We then illustrate
by example that it is impossible to apply STC after Arabic snippet
pre-processing (stem or root extraction), because the merging process yields
many redundant clusters. Second, to overcome this problem, we propose to
integrate STC into a new scheme that takes the properties of the Arabic
language into account, in order to make the Web better adapted to Arabic
users. The proposed approach automatically clusters Web search results into
high-quality clusters with highly significant labels. The obtained clusters
are not only coherent but can also convey their contents to users concisely
and accurately. Therefore, Arabic users can decide at a glance whether the
contents of a cluster are of interest...
|
[
{
"created": "Mon, 13 May 2013 12:28:34 GMT",
"version": "v1"
}
] |
2013-05-14
|
[
[
"Sahmoudi",
"Issam",
""
],
[
"Lachkar",
"Abdelmonaime",
""
]
] |
The process of browsing search results is one of the major problems with traditional Web search engines for English, European, and other languages in general, and for the Arabic language in particular. This process is time consuming and the browsing style is unattractive. Organizing Web search results into clusters facilitates users' quick browsing through search results. Traditional clustering techniques (data-centric clustering algorithms) are inadequate since they do not generate clusters with highly readable names or cluster labels. To solve this problem, description-centric algorithms such as the Suffix Tree Clustering (STC) algorithm have been introduced and used successfully and extensively, with different adapted versions, for English, European, and Chinese languages. However, at the time of writing this paper, to our knowledge, the STC algorithm had never been applied to clustering Arabic Web snippet search results. In this paper, we first study how STC can be applied to the Arabic language. We then illustrate by example that it is impossible to apply STC after Arabic snippet pre-processing (stem or root extraction), because the merging process yields many redundant clusters. Second, to overcome this problem, we propose to integrate STC into a new scheme that takes the properties of the Arabic language into account, in order to make the Web better adapted to Arabic users. The proposed approach automatically clusters Web search results into high-quality clusters with highly significant labels. The obtained clusters are not only coherent but can also convey their contents to users concisely and accurately. Therefore, Arabic users can decide at a glance whether the contents of a cluster are of interest...
|
1802.07983
|
Miroslav Bures
|
Miroslav Bures and Karel Frajtak and Bestoun S. Ahmed
|
Tapir: Automation Support of Exploratory Testing Using Model
Reconstruction of the System Under Test
| null | null |
10.1109/TR.2018.2799957
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a considerable number of software projects, the creation of effective
test cases is hindered by design documentation that is either lacking,
incomplete or obsolete. The exploratory testing approach can serve as a sound
method in such situations. However, the efficiency of this testing approach
strongly depends on the method, the documentation of explored parts of a
system, the organization and distribution of work among individual testers on a
team, and the minimization of potential (very probable) duplications in
performed tests. In this paper, we present a framework for replacing and
automating a portion of these tasks. A screen-flow-based model of the tested
system is incrementally reconstructed during the exploratory testing process by
tracking testers' activities. With additional metadata, the model serves for an
automated navigation process for a tester. Compared with the exploratory
testing approach, which is manually performed in two case studies, the proposed
framework allows the testers to explore a greater extent of the tested system
and enables greater detection of the defects present in the system. The results
show that the time efficiency of the testing process improved with framework
support. This efficiency can be increased by team-based navigational strategies
that are implemented within the proposed framework, which is documented by
another case study presented in this paper.
|
[
{
"created": "Thu, 22 Feb 2018 11:27:14 GMT",
"version": "v1"
}
] |
2019-12-05
|
[
[
"Bures",
"Miroslav",
""
],
[
"Frajtak",
"Karel",
""
],
[
"Ahmed",
"Bestoun S.",
""
]
] |
For a considerable number of software projects, the creation of effective test cases is hindered by design documentation that is either lacking, incomplete or obsolete. The exploratory testing approach can serve as a sound method in such situations. However, the efficiency of this testing approach strongly depends on the method, the documentation of explored parts of a system, the organization and distribution of work among individual testers on a team, and the minimization of potential (very probable) duplications in performed tests. In this paper, we present a framework for replacing and automating a portion of these tasks. A screen-flow-based model of the tested system is incrementally reconstructed during the exploratory testing process by tracking testers' activities. With additional metadata, the model serves for an automated navigation process for a tester. Compared with the exploratory testing approach, which is manually performed in two case studies, the proposed framework allows the testers to explore a greater extent of the tested system and enables greater detection of the defects present in the system. The results show that the time efficiency of the testing process improved with framework support. This efficiency can be increased by team-based navigational strategies that are implemented within the proposed framework, which is documented by another case study presented in this paper.
|
0910.1123
|
Arvind Yedla
|
Arvind Yedla, Henry D. Pfister, Krishna R. Narayanan
|
Can Iterative Decoding for Erasure Correlated Sources be Universal?
|
8 pages, to appear in Allerton '09
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider a few iterative decoding schemes for the joint
source-channel coding of correlated sources. Specifically, we consider the
joint source-channel coding of two erasure correlated sources with transmission
over different erasure channels. Our main interest is in determining whether or
not various code ensembles can achieve the capacity region universally over
varying channel conditions. We consider two ensembles in the class of
low-density generator-matrix (LDGM) codes known as Luby-Transform (LT) codes
and one ensemble of low-density parity-check (LDPC) codes. We analyze them
using density evolution and show that optimized LT codes can achieve the
extremal symmetric point of the capacity region. We also show that LT codes are
not universal under iterative decoding for this problem because they cannot
simultaneously achieve the extremal symmetric point and a corner point of the
capacity region. The sub-universality of iterative decoding is characterized by
studying the density evolution for LT codes.
|
[
{
"created": "Tue, 6 Oct 2009 22:10:56 GMT",
"version": "v1"
}
] |
2009-10-08
|
[
[
"Yedla",
"Arvind",
""
],
[
"Pfister",
"Henry D.",
""
],
[
"Narayanan",
"Krishna R.",
""
]
] |
In this paper, we consider a few iterative decoding schemes for the joint source-channel coding of correlated sources. Specifically, we consider the joint source-channel coding of two erasure correlated sources with transmission over different erasure channels. Our main interest is in determining whether or not various code ensembles can achieve the capacity region universally over varying channel conditions. We consider two ensembles in the class of low-density generator-matrix (LDGM) codes known as Luby-Transform (LT) codes and one ensemble of low-density parity-check (LDPC) codes. We analyze them using density evolution and show that optimized LT codes can achieve the extremal symmetric point of the capacity region. We also show that LT codes are not universal under iterative decoding for this problem because they cannot simultaneously achieve the extremal symmetric point and a corner point of the capacity region. The sub-universality of iterative decoding is characterized by studying the density evolution for LT codes.
|
2206.00227
|
Junbo Zhang
|
Junbo Zhang, Kaisheng Ma
|
Rethinking the Augmentation Module in Contrastive Learning: Learning
Hierarchical Augmentation Invariance with Expanded Views
|
Accepted to CVPR 2022
|
2022 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR)
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A data augmentation module is utilized in contrastive learning to transform
the given data example into two views, which is considered essential and
irreplaceable. However, the predetermined composition of multiple data
augmentations brings two drawbacks. First, the artificial choice of
augmentation types brings specific representational invariances to the model,
which have different degrees of positive and negative effects on different
downstream tasks. Treating each type of augmentation equally during training
makes the model learn non-optimal representations for various downstream tasks
and limits the flexibility to choose augmentation types beforehand. Second, the
strong data augmentations used in classic contrastive learning methods may
bring too much invariance in some cases, and fine-grained information that is
essential to some downstream tasks may be lost. This paper proposes a general
method to alleviate these two problems by considering where and what to
contrast in a general contrastive learning framework. We first propose to learn
different augmentation invariances at different depths of the model according
to the importance of each data augmentation instead of learning
representational invariances evenly in the backbone. We then propose to expand
the contrast content with augmentation embeddings to reduce the misleading
effects of strong data augmentations. Experiments based on several baseline
methods demonstrate that we learn better representations for various benchmarks
on classification, detection, and segmentation downstream tasks.
|
[
{
"created": "Wed, 1 Jun 2022 04:30:46 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Aug 2022 03:28:52 GMT",
"version": "v2"
}
] |
2022-08-23
|
[
[
"Zhang",
"Junbo",
""
],
[
"Ma",
"Kaisheng",
""
]
] |
A data augmentation module is utilized in contrastive learning to transform the given data example into two views, which is considered essential and irreplaceable. However, the predetermined composition of multiple data augmentations brings two drawbacks. First, the artificial choice of augmentation types brings specific representational invariances to the model, which have different degrees of positive and negative effects on different downstream tasks. Treating each type of augmentation equally during training makes the model learn non-optimal representations for various downstream tasks and limits the flexibility to choose augmentation types beforehand. Second, the strong data augmentations used in classic contrastive learning methods may bring too much invariance in some cases, and fine-grained information that is essential to some downstream tasks may be lost. This paper proposes a general method to alleviate these two problems by considering where and what to contrast in a general contrastive learning framework. We first propose to learn different augmentation invariances at different depths of the model according to the importance of each data augmentation instead of learning representational invariances evenly in the backbone. We then propose to expand the contrast content with augmentation embeddings to reduce the misleading effects of strong data augmentations. Experiments based on several baseline methods demonstrate that we learn better representations for various benchmarks on classification, detection, and segmentation downstream tasks.
|
0706.3132
|
Paulo Condado
|
Paulo A. Condado and Fernando G. Lobo
|
EasyVoice: Integrating voice synthesis with Skype
| null | null | null | null |
cs.CY cs.HC
| null |
This paper presents EasyVoice, a system that integrates voice synthesis with
Skype. EasyVoice allows a person with voice disabilities to talk with another
person located anywhere in the world, removing an important obstacle that
affects these people during a phone or VoIP-based conversation.
|
[
{
"created": "Thu, 21 Jun 2007 12:04:40 GMT",
"version": "v1"
}
] |
2007-06-22
|
[
[
"Condado",
"Paulo A.",
""
],
[
"Lobo",
"Fernando G.",
""
]
] |
This paper presents EasyVoice, a system that integrates voice synthesis with Skype. EasyVoice allows a person with voice disabilities to talk with another person located anywhere in the world, removing an important obstacle that affects these people during a phone or VoIP-based conversation.
|
1512.03866
|
Fei Li
|
Qiuyan Wang, Fei Li and Dongdai Lin
|
A Class of Linear Codes With Three Weights
|
11 pages,2 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear codes have been an interesting subject of study for many years.
Recently, linear codes with few weights have been constructed and extensively
studied. In this paper, for an odd prime p, a class of three-weight linear
codes over Fp are constructed. The weight distributions of the linear codes are
settled. These codes have applications in authentication codes, association
schemes and data storage systems.
|
[
{
"created": "Sat, 12 Dec 2015 02:42:24 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Dec 2015 03:32:36 GMT",
"version": "v2"
}
] |
2015-12-24
|
[
[
"Wang",
"Qiuyan",
""
],
[
"Li",
"Fei",
""
],
[
"Lin",
"Dongdai",
""
]
] |
Linear codes have been an interesting subject of study for many years. Recently, linear codes with few weights have been constructed and extensively studied. In this paper, for an odd prime p, a class of three-weight linear codes over Fp are constructed. The weight distributions of the linear codes are settled. These codes have applications in authentication codes, association schemes and data storage systems.
|
2206.12011
|
Zeynep K
|
Zeynep K and Bobak Nazer
|
Detecting Correlated Gaussian Databases
|
26 pages, 4 figures
| null | null | null |
cs.IT math.IT math.ST stat.TH
|
http://creativecommons.org/licenses/by/4.0/
|
This paper considers the problem of detecting whether two databases, each
consisting of $n$ users with $d$ Gaussian features, are correlated. Under the
null hypothesis, the databases are independent. Under the alternate hypothesis,
the features are correlated across databases, under an unknown row permutation.
A simple test is developed to show that detection is achievable above $\rho^2
\approx \frac{1}{d}$. For the converse, the truncated second moment method is
used to establish that detection is impossible below roughly $\rho^2 \approx
\frac{1}{d\sqrt{n}}$. These results are compared to the corresponding recovery
problem, where the goal is to decode the row permutation, and a converse bound
of roughly $\rho^2 \approx 1 - n^{-4/d}$ has been previously shown. For certain
choices of parameters, the detection achievability bound outperforms this
recovery converse bound, demonstrating that detection can be easier than
recovery in this scenario.
|
[
{
"created": "Thu, 23 Jun 2022 23:08:24 GMT",
"version": "v1"
}
] |
2022-06-27
|
[
[
"K",
"Zeynep",
""
],
[
"Nazer",
"Bobak",
""
]
] |
This paper considers the problem of detecting whether two databases, each consisting of $n$ users with $d$ Gaussian features, are correlated. Under the null hypothesis, the databases are independent. Under the alternate hypothesis, the features are correlated across databases, under an unknown row permutation. A simple test is developed to show that detection is achievable above $\rho^2 \approx \frac{1}{d}$. For the converse, the truncated second moment method is used to establish that detection is impossible below roughly $\rho^2 \approx \frac{1}{d\sqrt{n}}$. These results are compared to the corresponding recovery problem, where the goal is to decode the row permutation, and a converse bound of roughly $\rho^2 \approx 1 - n^{-4/d}$ has been previously shown. For certain choices of parameters, the detection achievability bound outperforms this recovery converse bound, demonstrating that detection can be easier than recovery in this scenario.
|
2208.05777
|
Shaina Raza Dr.
|
Shaina Raza, Deepak John Reji, Chen Ding
|
Dbias: Detecting biases and ensuring Fairness in news articles
|
Accepted for publication in International Journal of Data Science and
Analytics
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Because of the increasing use of data-centric systems and algorithms in
machine learning, the topic of fairness is receiving a lot of attention in the
academic and broader literature. This paper introduces Dbias
(https://pypi.org/project/Dbias/), an open-source Python package for ensuring
fairness in news articles. Dbias can take any text to determine if it is
biased. Then, it detects biased words in the text, masks them, and suggests a
set of sentences with new words that are bias-free or at least less biased. We
conduct extensive experiments to assess the performance of Dbias. To see how
well our approach works, we compare it to the existing fairness models. We also
test the individual components of Dbias to see how effective they are. The
experimental results show that Dbias outperforms all the baselines in terms of
accuracy and fairness. We make this package (Dbias) publicly available for
the developers and practitioners to mitigate biases in textual data (such as
news articles), as well as to encourage extension of this work.
|
[
{
"created": "Thu, 11 Aug 2022 12:14:06 GMT",
"version": "v1"
}
] |
2022-08-12
|
[
[
"Raza",
"Shaina",
""
],
[
"Reji",
"Deepak John",
""
],
[
"Ding",
"Chen",
""
]
] |
Because of the increasing use of data-centric systems and algorithms in machine learning, the topic of fairness is receiving a lot of attention in the academic and broader literature. This paper introduces Dbias (https://pypi.org/project/Dbias/), an open-source Python package for ensuring fairness in news articles. Dbias can take any text to determine if it is biased. Then, it detects biased words in the text, masks them, and suggests a set of sentences with new words that are bias-free or at least less biased. We conduct extensive experiments to assess the performance of Dbias. To see how well our approach works, we compare it to the existing fairness models. We also test the individual components of Dbias to see how effective they are. The experimental results show that Dbias outperforms all the baselines in terms of accuracy and fairness. We make this package (Dbias) publicly available for the developers and practitioners to mitigate biases in textual data (such as news articles), as well as to encourage extension of this work.
|
2104.00837
|
Pingchuan Ma
|
Pingchuan Ma, Tao Du, John Z. Zhang, Kui Wu, Andrew Spielberg, Robert
K. Katzschmann, Wojciech Matusik
|
DiffAqua: A Differentiable Computational Design Pipeline for Soft
Underwater Swimmers with Shape Interpolation
|
ACM SIGGRAPH 2021. Homepage: http://diffaqua.csail.mit.edu/
| null |
10.1145/3450626.3459832
| null |
cs.LG cs.GR cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The computational design of soft underwater swimmers is challenging because
of the high degrees of freedom in soft-body modeling. In this paper, we present
a differentiable pipeline for co-designing a soft swimmer's geometry and
controller. Our pipeline unlocks gradient-based algorithms for discovering
novel swimmer designs more efficiently than traditional gradient-free
solutions. We propose Wasserstein barycenters as a basis for the geometric
design of soft underwater swimmers since it is differentiable and can naturally
interpolate between bio-inspired base shapes via optimal transport. By
combining this design space with differentiable simulation and control, we can
efficiently optimize a soft underwater swimmer's performance with fewer
simulations than baseline methods. We demonstrate the efficacy of our method on
various design problems such as fast, stable, and energy-efficient swimming and
demonstrate applicability to multi-objective design.
|
[
{
"created": "Fri, 2 Apr 2021 01:18:15 GMT",
"version": "v1"
},
{
"created": "Wed, 5 May 2021 18:58:41 GMT",
"version": "v2"
}
] |
2021-05-07
|
[
[
"Ma",
"Pingchuan",
""
],
[
"Du",
"Tao",
""
],
[
"Zhang",
"John Z.",
""
],
[
"Wu",
"Kui",
""
],
[
"Spielberg",
"Andrew",
""
],
[
"Katzschmann",
"Robert K.",
""
],
[
"Matusik",
"Wojciech",
""
]
] |
The computational design of soft underwater swimmers is challenging because of the high degrees of freedom in soft-body modeling. In this paper, we present a differentiable pipeline for co-designing a soft swimmer's geometry and controller. Our pipeline unlocks gradient-based algorithms for discovering novel swimmer designs more efficiently than traditional gradient-free solutions. We propose Wasserstein barycenters as a basis for the geometric design of soft underwater swimmers since it is differentiable and can naturally interpolate between bio-inspired base shapes via optimal transport. By combining this design space with differentiable simulation and control, we can efficiently optimize a soft underwater swimmer's performance with fewer simulations than baseline methods. We demonstrate the efficacy of our method on various design problems such as fast, stable, and energy-efficient swimming and demonstrate applicability to multi-objective design.
|
2111.03144
|
Abhinav Agrawal
|
Abhinav Agrawal, Justin Domke
|
Amortized Variational Inference for Simple Hierarchical Models
|
Neural Information Processing Systems (NeurIPS) 2021
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is difficult to use subsampling with variational inference in hierarchical
models since the number of local latent variables scales with the dataset.
Thus, inference in hierarchical models remains a challenge at large scale. It
is helpful to use a variational family with structure matching the posterior,
but optimization is still slow due to the huge number of local distributions.
Instead, this paper suggests an amortized approach where shared parameters
simultaneously represent all local distributions. This approach is similarly
accurate as using a given joint distribution (e.g., a full-rank Gaussian) but
is feasible on datasets that are several orders of magnitude larger. It is also
dramatically faster than using a structured variational distribution.
|
[
{
"created": "Thu, 4 Nov 2021 20:29:12 GMT",
"version": "v1"
}
] |
2021-11-08
|
[
[
"Agrawal",
"Abhinav",
""
],
[
"Domke",
"Justin",
""
]
] |
It is difficult to use subsampling with variational inference in hierarchical models since the number of local latent variables scales with the dataset. Thus, inference in hierarchical models remains a challenge at large scale. It is helpful to use a variational family with structure matching the posterior, but optimization is still slow due to the huge number of local distributions. Instead, this paper suggests an amortized approach where shared parameters simultaneously represent all local distributions. This approach is similarly accurate as using a given joint distribution (e.g., a full-rank Gaussian) but is feasible on datasets that are several orders of magnitude larger. It is also dramatically faster than using a structured variational distribution.
|
1802.07779
|
Daniel DeFreez
|
Daniel DeFreez, Aditya V. Thakur, Cindy Rubio-Gonz\'alez
|
Path-Based Function Embedding and its Application to Specification
Mining
|
11 pages, 8 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying the relationships among program elements is useful for program
understanding, debugging, and analysis. One such relationship is synonymy.
Function synonyms are functions that play a similar role in code, e.g.
functions that perform initialization for different device drivers, or
functions that implement different symmetric-key encryption schemes. Function
synonyms are not necessarily semantically equivalent and can be syntactically
dissimilar; consequently, approaches for identifying code clones or functional
equivalence cannot be used to identify them. This paper presents func2vec, an
algorithm that maps each function to a vector in a vector space such that
function synonyms are grouped together. We compute the function embedding by
training a neural network on sentences generated from random walks over an
encoding of the program as a labeled pushdown system (l-PDS). We demonstrate
that func2vec is effective at identifying function synonyms in the Linux
kernel. Furthermore, we show how function synonyms enable mining error-handling
specifications with high support in Linux file systems and drivers.
|
[
{
"created": "Wed, 21 Feb 2018 20:02:52 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Feb 2018 04:22:50 GMT",
"version": "v2"
}
] |
2018-02-27
|
[
[
"DeFreez",
"Daniel",
""
],
[
"Thakur",
"Aditya V.",
""
],
[
"Rubio-González",
"Cindy",
""
]
] |
Identifying the relationships among program elements is useful for program understanding, debugging, and analysis. One such relationship is synonymy. Function synonyms are functions that play a similar role in code, e.g. functions that perform initialization for different device drivers, or functions that implement different symmetric-key encryption schemes. Function synonyms are not necessarily semantically equivalent and can be syntactically dissimilar; consequently, approaches for identifying code clones or functional equivalence cannot be used to identify them. This paper presents func2vec, an algorithm that maps each function to a vector in a vector space such that function synonyms are grouped together. We compute the function embedding by training a neural network on sentences generated from random walks over an encoding of the program as a labeled pushdown system (l-PDS). We demonstrate that func2vec is effective at identifying function synonyms in the Linux kernel. Furthermore, we show how function synonyms enable mining error-handling specifications with high support in Linux file systems and drivers.
|
1404.3920
|
Aske Plaat
|
Catholijn Jonker, Joost Broekens, Aske Plaat
|
Virtual Reflexes
| null | null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/3.0/
|
Virtual Reality is used successfully to treat people for regular phobias. A
new challenge is to develop Virtual Reality Exposure Training for social
skills. Virtual actors in such systems have to show appropriate social behavior
including emotions, gaze, and keeping distance. The behavior must be realistic
and real-time. Current approaches consist of four steps: 1) trainee social
signal detection, 2) cognitive-affective interpretation, 3) determination of
the appropriate bodily responses, and 4) actuation. The "cognitive" detour of
such approaches does not match the directness of human bodily reflexes and
causes unrealistic responses and delay. Instead, we propose virtual reflexes as
concurrent sensory-motor processes to control virtual actors. Here we present a
virtual reflexes architecture, explain how emotion and cognitive modulation are
embedded, detail its workings, and give an example description of an aggression
training application.
|
[
{
"created": "Mon, 14 Apr 2014 14:07:09 GMT",
"version": "v1"
}
] |
2014-04-16
|
[
[
"Jonker",
"Catholijn",
""
],
[
"Broekens",
"Joost",
""
],
[
"Plaat",
"Aske",
""
]
] |
Virtual Reality is used successfully to treat people for regular phobias. A new challenge is to develop Virtual Reality Exposure Training for social skills. Virtual actors in such systems have to show appropriate social behavior including emotions, gaze, and keeping distance. The behavior must be realistic and real-time. Current approaches consist of four steps: 1) trainee social signal detection, 2) cognitive-affective interpretation, 3) determination of the appropriate bodily responses, and 4) actuation. The "cognitive" detour of such approaches does not match the directness of human bodily reflexes and causes unrealistic responses and delay. Instead, we propose virtual reflexes as concurrent sensory-motor processes to control virtual actors. Here we present a virtual reflexes architecture, explain how emotion and cognitive modulation are embedded, detail its workings, and give an example description of an aggression training application.
|
1908.07018
|
Ayush Maheshwari
|
Ayush Maheshwari, Hrishikesh Patel, Nandan Rathod, Ritesh Kumar,
Ganesh Ramakrishnan and Pushpak Bhattacharyya
|
Tale of tails using rule augmented sequence labeling for event
extraction
|
9 pages, 4 figures, 6 tables
|
StarAI Workshop at AAAI 2020
| null | null |
cs.IR cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of event extraction is a relatively difficult task for low
resource languages due to the non-availability of sufficient annotated data.
Moreover, the task becomes complex for tail (rarely occurring) labels, for
which extremely little data is available. In this paper, we present a new
dataset
(InDEE-2019) in the disaster domain for multiple Indic languages, collected
from news websites. Using this dataset, we evaluate several rule-based
mechanisms to augment deep learning based models. We formulate our problem of
event extraction as a sequence labeling task and perform extensive experiments
to study and understand the effectiveness of different approaches. We further
show that tail labels can be easily incorporated by creating new rules without
the requirement of large annotated data.
|
[
{
"created": "Mon, 19 Aug 2019 18:43:06 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Aug 2019 06:10:02 GMT",
"version": "v2"
},
{
"created": "Fri, 31 Jan 2020 07:36:09 GMT",
"version": "v3"
}
] |
2020-11-23
|
[
[
"Maheshwari",
"Ayush",
""
],
[
"Patel",
"Hrishikesh",
""
],
[
"Rathod",
"Nandan",
""
],
[
"Kumar",
"Ritesh",
""
],
[
"Ramakrishnan",
"Ganesh",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
The problem of event extraction is a relatively difficult task for low resource languages due to the non-availability of sufficient annotated data. Moreover, the task becomes complex for tail (rarely occurring) labels, for which extremely little data is available. In this paper, we present a new dataset (InDEE-2019) in the disaster domain for multiple Indic languages, collected from news websites. Using this dataset, we evaluate several rule-based mechanisms to augment deep learning based models. We formulate our problem of event extraction as a sequence labeling task and perform extensive experiments to study and understand the effectiveness of different approaches. We further show that tail labels can be easily incorporated by creating new rules without the requirement of large annotated data.
|
1204.0535
|
S Muthukrishnan
|
Yishay Mansour, S. Muthukrishnan and Noam Nisan
|
Doubleclick Ad Exchange Auction
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Display advertisements on the web are sold via ad exchanges that use
real-time auctions. We describe the challenges of designing a suitable
auction, and
present a simple auction called the Optional Second Price (OSP) auction that is
currently used in Doubleclick Ad Exchange.
|
[
{
"created": "Mon, 2 Apr 2012 20:56:53 GMT",
"version": "v1"
}
] |
2012-04-04
|
[
[
"Mansour",
"Yishay",
""
],
[
"Muthukrishnan",
"S.",
""
],
[
"Nisan",
"Noam",
""
]
] |
Display advertisements on the web are sold via ad exchanges that use real-time auctions. We describe the challenges of designing a suitable auction, and present a simple auction called the Optional Second Price (OSP) auction that is currently used in Doubleclick Ad Exchange.
|
1410.4011
|
EPTCS
|
Amir M. Ben-Amram, Aviad Pineles
|
Flowchart Programs, Regular Expressions, and Decidability of Polynomial
Growth-Rate
|
In Proceedings VPT 2016, arXiv:1607.01835
|
EPTCS 216, 2016, pp. 24-49
|
10.4204/EPTCS.216.2
| null |
cs.PL cs.FL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new method for inferring complexity properties for a class of
programs in the form of flowcharts annotated with loop information.
Specifically, our method can (soundly and completely) decide if computed values
are polynomially bounded as a function of the input; and similarly for the
running time. Such complexity properties are undecidable for a Turing-complete
programming language, and a common work-around in program analysis is to settle
for sound but incomplete solutions. In contrast, we consider a class of
programs that is Turing-incomplete, but strong enough to include several
challenges for this kind of analysis. For a related language that has
well-structured syntax, similar to Meyer and Ritchie's LOOP programs, the
problem has been previously proved to be decidable. The analysis relied on the
compositionality of programs, hence the challenge in obtaining similar results
for flowchart programs with arbitrary control-flow graphs. Our answer to the
challenge is twofold: first, we propose a class of loop-annotated flowcharts,
which is more general than the class of flowcharts that directly represent
structured programs; secondly, we present a technique to reuse the ideas from
the work on structured programs and apply them to such flowcharts. The technique
is inspired by the classic translation of non-deterministic automata to regular
expressions, but we obviate the exponential cost of constructing such an
expression, obtaining a polynomial-time analysis. These ideas may well be
applicable to other analysis problems.
|
[
{
"created": "Wed, 15 Oct 2014 11:16:16 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Mar 2016 14:40:47 GMT",
"version": "v2"
},
{
"created": "Sun, 20 Mar 2016 10:01:27 GMT",
"version": "v3"
},
{
"created": "Wed, 1 Jun 2016 16:30:18 GMT",
"version": "v4"
},
{
"created": "Fri, 8 Jul 2016 05:30:33 GMT",
"version": "v5"
}
] |
2016-07-11
|
[
[
"Ben-Amram",
"Amir M.",
""
],
[
"Pineles",
"Aviad",
""
]
] |
We present a new method for inferring complexity properties for a class of programs in the form of flowcharts annotated with loop information. Specifically, our method can (soundly and completely) decide if computed values are polynomially bounded as a function of the input; and similarly for the running time. Such complexity properties are undecidable for a Turing-complete programming language, and a common work-around in program analysis is to settle for sound but incomplete solutions. In contrast, we consider a class of programs that is Turing-incomplete, but strong enough to include several challenges for this kind of analysis. For a related language that has well-structured syntax, similar to Meyer and Ritchie's LOOP programs, the problem has been previously proved to be decidable. The analysis relied on the compositionality of programs, hence the challenge in obtaining similar results for flowchart programs with arbitrary control-flow graphs. Our answer to the challenge is twofold: first, we propose a class of loop-annotated flowcharts, which is more general than the class of flowcharts that directly represent structured programs; secondly, we present a technique to reuse the ideas from the work on structured programs and apply them to such flowcharts. The technique is inspired by the classic translation of non-deterministic automata to regular expressions, but we obviate the exponential cost of constructing such an expression, obtaining a polynomial-time analysis. These ideas may well be applicable to other analysis problems.
|
2312.01656
|
Yilin Ye
|
Yilin Ye, Qian Zhu, Shishi Xiao, Kang Zhang, Wei Zeng
|
The Contemporary Art of Image Search: Iterative User Intent Expansion
via Vision-Language Model
|
Accepted by The 2024 ACM SIGCHI Conference on Computer-Supported
Cooperative Work & Social Computing (CSCW) (Proc. CSCW 2024)
| null | null | null |
cs.IR cs.AI cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Image search is an essential and user-friendly method to explore vast
galleries of digital images. However, existing image search methods heavily
rely on proximity measurements like tag matching or image similarity, requiring
precise user inputs for satisfactory results. To meet the growing demand for a
contemporary image search engine that enables accurate comprehension of users'
search intentions, we introduce an innovative user intent expansion framework.
Our framework leverages visual-language models to parse and compose multi-modal
user inputs to provide more accurate and satisfying results. It comprises a
two-stage process: 1) a parsing stage that incorporates a language parsing
module with large language models to enhance the comprehension of textual
inputs, along with a visual parsing module that integrates an interactive
segmentation module to swiftly identify detailed visual elements within images;
and 2) a logic composition stage that combines multiple user search intents
into a unified logic expression for more sophisticated operations in complex
searching scenarios. Moreover, the intent expansion framework enables users to
perform flexible contextualized interactions with the search results to further
specify or adjust their detailed search intents iteratively. We implemented the
framework into an image search system for NFT (non-fungible token) search and
conducted a user study to evaluate its usability and novel properties. The
results indicate that the proposed framework significantly improves users'
image search experience. Particularly the parsing and contextualized
interactions prove useful in allowing users to express their search intents
more accurately and engage in a more enjoyable iterative search experience.
|
[
{
"created": "Mon, 4 Dec 2023 06:14:25 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Dec 2023 02:24:38 GMT",
"version": "v2"
}
] |
2023-12-06
|
[
[
"Ye",
"Yilin",
""
],
[
"Zhu",
"Qian",
""
],
[
"Xiao",
"Shishi",
""
],
[
"Zhang",
"Kang",
""
],
[
"Zeng",
"Wei",
""
]
] |
Image search is an essential and user-friendly method to explore vast galleries of digital images. However, existing image search methods heavily rely on proximity measurements like tag matching or image similarity, requiring precise user inputs for satisfactory results. To meet the growing demand for a contemporary image search engine that enables accurate comprehension of users' search intentions, we introduce an innovative user intent expansion framework. Our framework leverages visual-language models to parse and compose multi-modal user inputs to provide more accurate and satisfying results. It comprises a two-stage process: 1) a parsing stage that incorporates a language parsing module with large language models to enhance the comprehension of textual inputs, along with a visual parsing module that integrates an interactive segmentation module to swiftly identify detailed visual elements within images; and 2) a logic composition stage that combines multiple user search intents into a unified logic expression for more sophisticated operations in complex searching scenarios. Moreover, the intent expansion framework enables users to perform flexible contextualized interactions with the search results to further specify or adjust their detailed search intents iteratively. We implemented the framework into an image search system for NFT (non-fungible token) search and conducted a user study to evaluate its usability and novel properties. The results indicate that the proposed framework significantly improves users' image search experience. Particularly the parsing and contextualized interactions prove useful in allowing users to express their search intents more accurately and engage in a more enjoyable iterative search experience.
|
2302.05743
|
Zian Li
|
Zian Li, Xiyuan Wang, Yinan Huang, Muhan Zhang
|
Is Distance Matrix Enough for Geometric Deep Learning?
|
To be published in NeurIPS2023
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Graph Neural Networks (GNNs) are often used for tasks involving the 3D
geometry of a given graph, such as molecular dynamics simulation. While
incorporating Euclidean distance into Message Passing Neural Networks (referred
to as Vanilla DisGNN) is a straightforward way to learn the geometry, it has
been demonstrated that Vanilla DisGNN is geometrically incomplete. In this
work, we first construct families of novel and symmetric geometric graphs that
Vanilla DisGNN cannot distinguish even when considering all-pair distances,
which greatly expands the existing counterexample families. Our counterexamples
show the inherent limitation of Vanilla DisGNN to capture symmetric geometric
structures. We then propose $k$-DisGNNs, which can effectively exploit the rich
geometry contained in the distance matrix. We demonstrate the high expressive
power of $k$-DisGNNs from three perspectives: 1. They can learn high-order
geometric information that cannot be captured by Vanilla DisGNN. 2. They can
unify some existing well-designed geometric models. 3. They are universal
function approximators from geometric graphs to scalars (when $k\geq 2$) and
vectors (when $k\geq 3$). Most importantly, we establish a connection between
geometric deep learning (GDL) and traditional graph representation learning
(GRL), showing that those highly expressive GNN models originally designed for
GRL can also be applied to GDL with impressive performance, and that existing
complicated, equivariant models are not the only solution. Experiments verify
our theory. Our $k$-DisGNNs achieve many new state-of-the-art results on MD17.
|
[
{
"created": "Sat, 11 Feb 2023 16:54:20 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Apr 2023 15:31:56 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Apr 2023 14:37:06 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Jun 2023 08:12:40 GMT",
"version": "v4"
},
{
"created": "Tue, 31 Oct 2023 03:07:53 GMT",
"version": "v5"
}
] |
2023-11-01
|
[
[
"Li",
"Zian",
""
],
[
"Wang",
"Xiyuan",
""
],
[
"Huang",
"Yinan",
""
],
[
"Zhang",
"Muhan",
""
]
] |
Graph Neural Networks (GNNs) are often used for tasks involving the 3D geometry of a given graph, such as molecular dynamics simulation. While incorporating Euclidean distance into Message Passing Neural Networks (referred to as Vanilla DisGNN) is a straightforward way to learn the geometry, it has been demonstrated that Vanilla DisGNN is geometrically incomplete. In this work, we first construct families of novel and symmetric geometric graphs that Vanilla DisGNN cannot distinguish even when considering all-pair distances, which greatly expands the existing counterexample families. Our counterexamples show the inherent limitation of Vanilla DisGNN to capture symmetric geometric structures. We then propose $k$-DisGNNs, which can effectively exploit the rich geometry contained in the distance matrix. We demonstrate the high expressive power of $k$-DisGNNs from three perspectives: 1. They can learn high-order geometric information that cannot be captured by Vanilla DisGNN. 2. They can unify some existing well-designed geometric models. 3. They are universal function approximators from geometric graphs to scalars (when $k\geq 2$) and vectors (when $k\geq 3$). Most importantly, we establish a connection between geometric deep learning (GDL) and traditional graph representation learning (GRL), showing that those highly expressive GNN models originally designed for GRL can also be applied to GDL with impressive performance, and that existing complicated, equivariant models are not the only solution. Experiments verify our theory. Our $k$-DisGNNs achieve many new state-of-the-art results on MD17.
|
1803.02123
|
Per Skarin
|
Per Skarin, William T\"arneberg, Karl-Erik {\AA}rzen, Maria Kihl
|
Towards Mission-Critical Control at the Edge and Over 5G
|
June 18th: Upload the final version as submitted to IEEE Services
[EDGE] 2018 on May 16th (updated abstract and some wording, results
unchanged)
| null | null | null |
cs.SY cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the emergence of industrial IoT and cloud computing, and the advent of
5G and edge clouds, there are ambitious expectations on elasticity, economies
of scale, and fast time to market for demanding use cases in the next
generation of ICT networks. Responsiveness and reliability of wireless
communication links and services in the cloud are set to improve significantly
as the concept of edge clouds is becoming more prevalent. To enable industrial
uptake we must provide cloud capacity in the networks but also a sufficient
level of simplicity and self-sustainability in the software platforms. In this
paper, we present a research test-bed built to study mission-critical control
over the distributed edge cloud. We evaluate system properties using a
conventional control application in the form of a Model Predictive Controller.
Our cloud platform provides the means to continuously operate our
mission-critical application while seamlessly relocating computations across
geographically dispersed compute nodes. Through our use of 5G wireless radio,
we allow for mobility and reliably provide compute resources with low latency,
at the edge. The primary contribution of this paper is a state-of-the art,
fully operational test-bed showing the potential for merged IoT, 5G, and cloud.
We also provide an evaluation of the system while operating a mission-critical
application and provide an outlook on a novel research direction.
|
[
{
"created": "Tue, 6 Mar 2018 11:31:59 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jun 2018 08:25:56 GMT",
"version": "v2"
}
] |
2018-06-19
|
[
[
"Skarin",
"Per",
""
],
[
"Tärneberg",
"William",
""
],
[
"Årzen",
"Karl-Erik",
""
],
[
"Kihl",
"Maria",
""
]
] |
With the emergence of industrial IoT and cloud computing, and the advent of 5G and edge clouds, there are ambitious expectations on elasticity, economies of scale, and fast time to market for demanding use cases in the next generation of ICT networks. Responsiveness and reliability of wireless communication links and services in the cloud are set to improve significantly as the concept of edge clouds is becoming more prevalent. To enable industrial uptake we must provide cloud capacity in the networks but also a sufficient level of simplicity and self-sustainability in the software platforms. In this paper, we present a research test-bed built to study mission-critical control over the distributed edge cloud. We evaluate system properties using a conventional control application in the form of a Model Predictive Controller. Our cloud platform provides the means to continuously operate our mission-critical application while seamlessly relocating computations across geographically dispersed compute nodes. Through our use of 5G wireless radio, we allow for mobility and reliably provide compute resources with low latency, at the edge. The primary contribution of this paper is a state-of-the-art, fully operational test-bed showing the potential for merged IoT, 5G, and cloud. We also provide an evaluation of the system while operating a mission-critical application and provide an outlook on a novel research direction.
|
2205.14198
|
Marina Knittel
|
Marina Knittel, Max Springer, John P. Dickerson, MohammadTaghi
Hajiaghayi
|
Generalized Reductions: Making any Hierarchical Clustering Fair and
Balanced with Low Cost
| null | null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clustering is a fundamental building block of modern statistical analysis
pipelines. Fair clustering has seen much attention from the machine learning
community in recent years. We are some of the first to study fairness in the
context of hierarchical clustering, after the results of Ahmadian et al. from
NeurIPS in 2020. We evaluate our results using Dasgupta's cost function,
perhaps one of the most prevalent theoretical metrics for hierarchical
clustering evaluation. Our work vastly improves the previous
$O(n^{5/6}poly\log(n))$ fair approximation for cost to a near polylogarithmic
$O(n^\delta poly\log(n))$ fair approximation for any constant $\delta\in(0,1)$.
This result establishes a cost-fairness tradeoff and extends to broader
fairness constraints than the previous work. We also show how to alter existing
hierarchical clusterings to guarantee fairness and cluster balance across any
level in the hierarchy.
|
[
{
"created": "Fri, 27 May 2022 19:04:00 GMT",
"version": "v1"
},
{
"created": "Tue, 9 May 2023 21:59:01 GMT",
"version": "v2"
}
] |
2023-05-11
|
[
[
"Knittel",
"Marina",
""
],
[
"Springer",
"Max",
""
],
[
"Dickerson",
"John P.",
""
],
[
"Hajiaghayi",
"MohammadTaghi",
""
]
] |
Clustering is a fundamental building block of modern statistical analysis pipelines. Fair clustering has seen much attention from the machine learning community in recent years. We are some of the first to study fairness in the context of hierarchical clustering, after the results of Ahmadian et al. from NeurIPS in 2020. We evaluate our results using Dasgupta's cost function, perhaps one of the most prevalent theoretical metrics for hierarchical clustering evaluation. Our work vastly improves the previous $O(n^{5/6}poly\log(n))$ fair approximation for cost to a near polylogarithmic $O(n^\delta poly\log(n))$ fair approximation for any constant $\delta\in(0,1)$. This result establishes a cost-fairness tradeoff and extends to broader fairness constraints than the previous work. We also show how to alter existing hierarchical clusterings to guarantee fairness and cluster balance across any level in the hierarchy.
|
2307.03003
|
Johannes Jakubik
|
Johannes Jakubik, Daniel Weber, Patrick Hemmer, Michael V\"ossing,
Gerhard Satzger
|
Improving the Efficiency of Human-in-the-Loop Systems: Adding Artificial
to Human Experts
|
Accepted at International Conference on Wirtschaftsinformatik, 2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information systems increasingly leverage artificial intelligence (AI) and
machine learning (ML) to generate value from vast amounts of data. However, ML
models are imperfect and can generate incorrect classifications. Hence,
human-in-the-loop (HITL) extensions to ML models add a human review for
instances that are difficult to classify. This study argues that continuously
relying on human experts to handle difficult model classifications leads to a
strong increase in human effort, which strains limited resources. To address
this issue, we propose a hybrid system that creates artificial experts that
learn to classify data instances from unknown classes previously reviewed by
human experts. Our hybrid system assesses which artificial expert is suitable
for classifying an instance from an unknown class and automatically assigns it.
Over time, this reduces human effort and increases the efficiency of the
system. Our experiments demonstrate that our approach outperforms traditional
HITL systems for several benchmarks on image classification.
|
[
{
"created": "Thu, 6 Jul 2023 14:06:23 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jul 2023 06:39:38 GMT",
"version": "v2"
}
] |
2023-07-10
|
[
[
"Jakubik",
"Johannes",
""
],
[
"Weber",
"Daniel",
""
],
[
"Hemmer",
"Patrick",
""
],
[
"Vössing",
"Michael",
""
],
[
"Satzger",
"Gerhard",
""
]
] |
Information systems increasingly leverage artificial intelligence (AI) and machine learning (ML) to generate value from vast amounts of data. However, ML models are imperfect and can generate incorrect classifications. Hence, human-in-the-loop (HITL) extensions to ML models add a human review for instances that are difficult to classify. This study argues that continuously relying on human experts to handle difficult model classifications leads to a strong increase in human effort, which strains limited resources. To address this issue, we propose a hybrid system that creates artificial experts that learn to classify data instances from unknown classes previously reviewed by human experts. Our hybrid system assesses which artificial expert is suitable for classifying an instance from an unknown class and automatically assigns it. Over time, this reduces human effort and increases the efficiency of the system. Our experiments demonstrate that our approach outperforms traditional HITL systems for several benchmarks on image classification.
|
1411.6749
|
Tie (Tony) Luo
|
Tie Luo and Mehul Motani and Vikram Srinivasan
|
Analyzing DISH for Multi-Channel MAC Protocols in Wireless Networks
|
Multi-channel multi-hop networks, availability of cooperation,
cooperative protocol, distributed information sharing, ACM MobiHoc, May 2008
| null | null | null |
cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a long time, node cooperation has been exploited as a data relaying mechanism.
However, the wireless channel allows for much richer interaction between nodes.
One such scenario is in a multi-channel environment, where transmitter-receiver
pairs may make incorrect decisions (e.g., in selecting channels) but idle
neighbors could help by sharing information to prevent undesirable consequences
(e.g., data collisions). This represents a Distributed Information SHaring
(DISH) mechanism for cooperation and suggests new ways of designing cooperative
protocols. However, what is lacking is a theoretical understanding of this new
notion of cooperation. In this paper, we view cooperation as a network resource
and evaluate the availability of cooperation via a metric, $p_{co}$, the
probability of obtaining cooperation. First, we analytically evaluate $p_{co}$
in the context of multi-channel multi-hop wireless networks. Second, we verify
our analysis via simulations and the results show that our analysis accurately
characterizes the behavior of $p_{co}$ as a function of underlying network
parameters. This step also yields important insights into DISH with respect to
network dynamics. Third, we investigate the correlation between $p_{co}$ and
network performance in terms of collision rate, packet delay, and throughput.
The results indicate a near-linear relationship, which may significantly
simplify performance analysis for cooperative networks and suggests that
$p_{co}$ be used as an appropriate performance indicator itself. Throughout
this work, we utilize, as appropriate, three different DISH contexts ---
model-based DISH, ideal DISH, and real DISH --- to explore $p_{co}$.
|
[
{
"created": "Tue, 25 Nov 2014 06:58:51 GMT",
"version": "v1"
}
] |
2014-11-26
|
[
[
"Luo",
"Tie",
""
],
[
"Motani",
"Mehul",
""
],
[
"Srinivasan",
"Vikram",
""
]
] |
For a long time, node cooperation has been exploited as a data relaying mechanism. However, the wireless channel allows for much richer interaction between nodes. One such scenario is in a multi-channel environment, where transmitter-receiver pairs may make incorrect decisions (e.g., in selecting channels) but idle neighbors could help by sharing information to prevent undesirable consequences (e.g., data collisions). This represents a Distributed Information SHaring (DISH) mechanism for cooperation and suggests new ways of designing cooperative protocols. However, what is lacking is a theoretical understanding of this new notion of cooperation. In this paper, we view cooperation as a network resource and evaluate the availability of cooperation via a metric, $p_{co}$, the probability of obtaining cooperation. First, we analytically evaluate $p_{co}$ in the context of multi-channel multi-hop wireless networks. Second, we verify our analysis via simulations and the results show that our analysis accurately characterizes the behavior of $p_{co}$ as a function of underlying network parameters. This step also yields important insights into DISH with respect to network dynamics. Third, we investigate the correlation between $p_{co}$ and network performance in terms of collision rate, packet delay, and throughput. The results indicate a near-linear relationship, which may significantly simplify performance analysis for cooperative networks and suggests that $p_{co}$ be used as an appropriate performance indicator itself. Throughout this work, we utilize, as appropriate, three different DISH contexts --- model-based DISH, ideal DISH, and real DISH --- to explore $p_{co}$.
|
1803.01166
|
Seonwook Park
|
Seonwook Park and Christoph Gebhardt and Roman R\"adle and Anna Feit
and Hana Vrzakova and Niraj Dayama and Hui-Shyong Yeo and Clemens Klokmose
and Aaron Quigley and Antti Oulasvirta and Otmar Hilliges
|
AdaM: Adapting Multi-User Interfaces for Collaborative Environments in
Real-Time
|
formatting tweaks
| null |
10.1145/3173574.3173758
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing cross-device multi-user interfaces (UIs) is a challenging problem.
There are numerous ways in which content and interactivity can be distributed.
However, good solutions must consider multiple users, their roles, their
preferences and access rights, as well as device capabilities. Manual and
rule-based solutions are tedious to create and do not scale to larger problems
nor do they adapt to dynamic changes, such as users leaving or joining an
activity. In this paper, we cast the problem of UI distribution as an
assignment problem and propose to solve it using combinatorial optimization. We
present a mixed integer programming formulation which allows real-time
applications in dynamically changing collaborative settings. It optimizes the
allocation of UI elements based on device capabilities, user roles,
preferences, and access rights. We present a proof-of-concept
designer-in-the-loop tool, allowing for quick solution exploration. Finally, we
compare our approach to traditional paper prototyping in a lab study.
|
[
{
"created": "Sat, 3 Mar 2018 14:05:07 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Mar 2018 11:22:23 GMT",
"version": "v2"
}
] |
2018-03-30
|
[
[
"Park",
"Seonwook",
""
],
[
"Gebhardt",
"Christoph",
""
],
[
"Rädle",
"Roman",
""
],
[
"Feit",
"Anna",
""
],
[
"Vrzakova",
"Hana",
""
],
[
"Dayama",
"Niraj",
""
],
[
"Yeo",
"Hui-Shyong",
""
],
[
"Klokmose",
"Clemens",
""
],
[
"Quigley",
"Aaron",
""
],
[
"Oulasvirta",
"Antti",
""
],
[
"Hilliges",
"Otmar",
""
]
] |
Developing cross-device multi-user interfaces (UIs) is a challenging problem. There are numerous ways in which content and interactivity can be distributed. However, good solutions must consider multiple users, their roles, their preferences and access rights, as well as device capabilities. Manual and rule-based solutions are tedious to create and do not scale to larger problems nor do they adapt to dynamic changes, such as users leaving or joining an activity. In this paper, we cast the problem of UI distribution as an assignment problem and propose to solve it using combinatorial optimization. We present a mixed integer programming formulation which allows real-time applications in dynamically changing collaborative settings. It optimizes the allocation of UI elements based on device capabilities, user roles, preferences, and access rights. We present a proof-of-concept designer-in-the-loop tool, allowing for quick solution exploration. Finally, we compare our approach to traditional paper prototyping in a lab study.
|
2312.03357
|
Doriand Petit
|
Doriand Petit, Steve Bourgeois, Dumitru Pavel, Vincent Gay-Bellile,
Florian Chabot and Loic Barthe
|
RING-NeRF : Rethinking Inductive Biases for Versatile and Efficient
Neural Fields
|
This publication has been accepted at ECCV'24
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in Neural Fields mostly rely on developing task-specific
supervision which often complicates the models. Rather than developing specific,
hard-to-combine modules, another generally overlooked approach is
to directly inject generic priors on the scene representation (also called
inductive biases) into the NeRF architecture. Based on this idea, we propose
the RING-NeRF architecture which includes two inductive biases: a continuous
multi-scale representation of the scene and an invariance of the decoder's
latent space over spatial and scale domains. We also design a single
reconstruction process that takes advantage of those inductive biases and
experimentally demonstrates on-par performance in terms of quality with
dedicated architectures on multiple tasks (anti-aliasing, few-view
reconstruction, SDF reconstruction without scene-specific initialization) while
being more efficient. Moreover, RING-NeRF has the distinctive ability to
dynamically increase the resolution of the model, opening the way to adaptive
reconstruction.
|
[
{
"created": "Wed, 6 Dec 2023 08:54:04 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Mar 2024 13:58:06 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Jul 2024 07:47:30 GMT",
"version": "v3"
}
] |
2024-07-18
|
[
[
"Petit",
"Doriand",
""
],
[
"Bourgeois",
"Steve",
""
],
[
"Pavel",
"Dumitru",
""
],
[
"Gay-Bellile",
"Vincent",
""
],
[
"Chabot",
"Florian",
""
],
[
"Barthe",
"Loic",
""
]
] |
Recent advances in Neural Fields mostly rely on developing task-specific supervision which often complicates the models. Rather than developing specific, hard-to-combine modules, another generally overlooked approach is to directly inject generic priors on the scene representation (also called inductive biases) into the NeRF architecture. Based on this idea, we propose the RING-NeRF architecture which includes two inductive biases: a continuous multi-scale representation of the scene and an invariance of the decoder's latent space over spatial and scale domains. We also design a single reconstruction process that takes advantage of those inductive biases and experimentally demonstrates on-par performance in terms of quality with dedicated architectures on multiple tasks (anti-aliasing, few-view reconstruction, SDF reconstruction without scene-specific initialization) while being more efficient. Moreover, RING-NeRF has the distinctive ability to dynamically increase the resolution of the model, opening the way to adaptive reconstruction.
|
2402.02399
|
Hao Wang
|
Hao Wang, Licheng Pan, Zhichao Chen, Degui Yang, Sen Zhang, Yifei
Yang, Xinggao Liu, Haoxuan Li, Dacheng Tao
|
FreDF: Learning to Forecast in Frequency Domain
| null | null | null | null |
cs.LG cs.AI stat.AP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Time series modeling is uniquely challenged by the presence of
autocorrelation in both historical and label sequences. Current research
predominantly focuses on handling autocorrelation within the historical
sequence but often neglects its presence in the label sequence. Specifically,
emerging forecast models mainly conform to the direct forecast (DF) paradigm,
generating multi-step forecasts under the assumption of conditional
independence within the label sequence. This assumption disregards the inherent
autocorrelation in the label sequence, thereby limiting the performance of
DF-based models. In response to this gap, we introduce the Frequency-enhanced
Direct Forecast (FreDF), which bypasses the complexity of label autocorrelation
by learning to forecast in the frequency domain. Our experiments demonstrate
that FreDF substantially outperforms existing state-of-the-art methods
including iTransformer and is compatible with a variety of forecast models.
|
[
{
"created": "Sun, 4 Feb 2024 08:23:41 GMT",
"version": "v1"
}
] |
2024-02-06
|
[
[
"Wang",
"Hao",
""
],
[
"Pan",
"Licheng",
""
],
[
"Chen",
"Zhichao",
""
],
[
"Yang",
"Degui",
""
],
[
"Zhang",
"Sen",
""
],
[
"Yang",
"Yifei",
""
],
[
"Liu",
"Xinggao",
""
],
[
"Li",
"Haoxuan",
""
],
[
"Tao",
"Dacheng",
""
]
] |
Time series modeling is uniquely challenged by the presence of autocorrelation in both historical and label sequences. Current research predominantly focuses on handling autocorrelation within the historical sequence but often neglects its presence in the label sequence. Specifically, emerging forecast models mainly conform to the direct forecast (DF) paradigm, generating multi-step forecasts under the assumption of conditional independence within the label sequence. This assumption disregards the inherent autocorrelation in the label sequence, thereby limiting the performance of DF-based models. In response to this gap, we introduce the Frequency-enhanced Direct Forecast (FreDF), which bypasses the complexity of label autocorrelation by learning to forecast in the frequency domain. Our experiments demonstrate that FreDF substantially outperforms existing state-of-the-art methods including iTransformer and is compatible with a variety of forecast models.
|
2311.15260
|
Adam Tonderski
|
Adam Tonderski, Carl Lindstr\"om, Georg Hess, William Ljungbergh,
Lennart Svensson, Christoffer Petersson
|
NeuRAD: Neural Rendering for Autonomous Driving
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Neural radiance fields (NeRFs) have gained popularity in the autonomous
driving (AD) community. Recent methods show NeRFs' potential for closed-loop
simulation, enabling testing of AD systems, and as an advanced training data
augmentation technique. However, existing methods often require long training
times, dense semantic supervision, or lack generalizability. This, in turn,
hinders the application of NeRFs for AD at scale. In this paper, we propose
NeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our
method features simple network design, extensive sensor modeling for both
camera and lidar -- including rolling shutter, beam divergence and ray dropping
-- and is applicable to multiple datasets out of the box. We verify its
performance on five popular AD datasets, achieving state-of-the-art performance
across the board. To encourage further development, we will openly release the
NeuRAD source code. See https://github.com/georghess/NeuRAD .
|
[
{
"created": "Sun, 26 Nov 2023 10:27:22 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Dec 2023 09:53:18 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Apr 2024 12:44:56 GMT",
"version": "v3"
}
] |
2024-04-19
|
[
[
"Tonderski",
"Adam",
""
],
[
"Lindström",
"Carl",
""
],
[
"Hess",
"Georg",
""
],
[
"Ljungbergh",
"William",
""
],
[
"Svensson",
"Lennart",
""
],
[
"Petersson",
"Christoffer",
""
]
] |
Neural radiance fields (NeRFs) have gained popularity in the autonomous driving (AD) community. Recent methods show NeRFs' potential for closed-loop simulation, enabling testing of AD systems, and as an advanced training data augmentation technique. However, existing methods often require long training times, dense semantic supervision, or lack generalizability. This, in turn, hinders the application of NeRFs for AD at scale. In this paper, we propose NeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our method features simple network design, extensive sensor modeling for both camera and lidar -- including rolling shutter, beam divergence and ray dropping -- and is applicable to multiple datasets out of the box. We verify its performance on five popular AD datasets, achieving state-of-the-art performance across the board. To encourage further development, we will openly release the NeuRAD source code. See https://github.com/georghess/NeuRAD .
|
1207.2847
|
Kai Liu
|
Kai Liu, Hock Beng Lim
|
Positioning Accuracy Improvement via Distributed Location Estimate in
Cooperative Vehicular Networks
|
To appear in Proc. of the 15th International IEEE Conference on
Intelligent Transportation Systems (IEEE ITSC'12)
| null |
10.1109/ITSC.2012.6338743
| null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of cooperative vehicle safety (CVS) applications, such as
collision warnings, turning assistants, speed advisories, etc., has
received great attention in the past few years. Accurate vehicular localization
is essential to enable these applications. In this study, motivated by the
proliferation of the Global Positioning System (GPS) devices, and the
increasing sophistication of wireless communication technologies in vehicular
networks, we propose a distributed location estimate algorithm to improve the
positioning accuracy via cooperative inter-vehicle distance measurement. In
particular, we compute the inter-vehicle distance based on raw GPS pseudorange
measurements, instead of depending on traditional radio-based ranging
techniques, which usually either suffer from high hardware cost or have
inadequate positioning accuracy. In addition, we improve the estimation of the
vehicles' locations only based on the inaccurate GPS fixes, without using any
anchors with known exact locations. The algorithm is decentralized, which
enhances its practicability in highly dynamic vehicular networks. We have
developed a simulation model to evaluate the performance of the proposed
algorithm, and the results demonstrate that the algorithm can significantly
improve the positioning accuracy.
|
[
{
"created": "Thu, 12 Jul 2012 05:27:16 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Jul 2012 06:32:37 GMT",
"version": "v2"
},
{
"created": "Fri, 20 Jul 2012 08:33:23 GMT",
"version": "v3"
}
] |
2016-11-15
|
[
[
"Liu",
"Kai",
""
],
[
"Lim",
"Hock Beng",
""
]
] |
The development of cooperative vehicle safety (CVS) applications, such as collision warnings, turning assistants, speed advisories, etc., has received great attention in the past few years. Accurate vehicular localization is essential to enable these applications. In this study, motivated by the proliferation of the Global Positioning System (GPS) devices, and the increasing sophistication of wireless communication technologies in vehicular networks, we propose a distributed location estimate algorithm to improve the positioning accuracy via cooperative inter-vehicle distance measurement. In particular, we compute the inter-vehicle distance based on raw GPS pseudorange measurements, instead of depending on traditional radio-based ranging techniques, which usually either suffer from high hardware cost or have inadequate positioning accuracy. In addition, we improve the estimation of the vehicles' locations only based on the inaccurate GPS fixes, without using any anchors with known exact locations. The algorithm is decentralized, which enhances its practicability in highly dynamic vehicular networks. We have developed a simulation model to evaluate the performance of the proposed algorithm, and the results demonstrate that the algorithm can significantly improve the positioning accuracy.
|
0802.0820
|
Jonathan Hayman
|
Jonathan Hayman and Glynn Winskel
|
Independence and concurrent separation logic
| null |
Logical Methods in Computer Science, Volume 4, Issue 1 (March 19,
2008) lmcs:1100
|
10.2168/LMCS-4(1:6)2008
| null |
cs.LO cs.PL
| null |
A compositional Petri net-based semantics is given to a simple language
allowing pointer manipulation and parallelism. The model is then applied to
give a notion of validity to the judgements made by concurrent separation logic
that emphasizes the process-environment duality inherent in such rely-guarantee
reasoning. Soundness of the rules of concurrent separation logic with respect
to this definition of validity is shown. The independence information retained
by the Petri net model is then exploited to characterize the independence of
parallel processes enforced by the logic. This is shown to permit a refinement
operation capable of changing the granularity of atomic actions.
|
[
{
"created": "Wed, 6 Feb 2008 15:39:20 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Mar 2008 15:26:51 GMT",
"version": "v2"
}
] |
2015-07-01
|
[
[
"Hayman",
"Jonathan",
""
],
[
"Winskel",
"Glynn",
""
]
] |
A compositional Petri net-based semantics is given to a simple language allowing pointer manipulation and parallelism. The model is then applied to give a notion of validity to the judgements made by concurrent separation logic that emphasizes the process-environment duality inherent in such rely-guarantee reasoning. Soundness of the rules of concurrent separation logic with respect to this definition of validity is shown. The independence information retained by the Petri net model is then exploited to characterize the independence of parallel processes enforced by the logic. This is shown to permit a refinement operation capable of changing the granularity of atomic actions.
|
2101.00318
|
Xiaofeng Liu
|
Xiaofeng Liu, Xiongchang Liu, Bo Hu, Wenxuan Ji, Fangxu Xing, Jun Lu,
Jane You, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo
|
Subtype-aware Unsupervised Domain Adaptation for Medical Diagnosis
|
Accepted to AAAI 2021
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in unsupervised domain adaptation (UDA) show that
transferable prototypical learning presents a powerful means for class
conditional alignment, which encourages the closeness of cross-domain class
centroids. However, the cross-domain inner-class compactness and the underlying
fine-grained subtype structure remain largely underexplored. In this work, we
propose to adaptively carry out the fine-grained subtype-aware alignment by
explicitly enforcing the class-wise separation and subtype-wise compactness
with intermediate pseudo labels. Our key insight is that the unlabeled subtypes
of a class can be divergent to one another with different conditional and label
shifts, while inheriting the local proximity within a subtype. The cases with
and without prior information on subtype numbers are investigated to
discover the underlying subtype structure in an online fashion. The proposed
subtype-aware dynamic UDA achieves promising results on medical diagnosis
tasks.
|
[
{
"created": "Fri, 1 Jan 2021 21:04:50 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Jan 2021 15:09:03 GMT",
"version": "v2"
}
] |
2021-01-12
|
[
[
"Liu",
"Xiaofeng",
""
],
[
"Liu",
"Xiongchang",
""
],
[
"Hu",
"Bo",
""
],
[
"Ji",
"Wenxuan",
""
],
[
"Xing",
"Fangxu",
""
],
[
"Lu",
"Jun",
""
],
[
"You",
"Jane",
""
],
[
"Kuo",
"C. -C. Jay",
""
],
[
"Fakhri",
"Georges El",
""
],
[
"Woo",
"Jonghye",
""
]
] |
Recent advances in unsupervised domain adaptation (UDA) show that transferable prototypical learning presents a powerful means for class conditional alignment, which encourages the closeness of cross-domain class centroids. However, the cross-domain inner-class compactness and the underlying fine-grained subtype structure remain largely underexplored. In this work, we propose to adaptively carry out the fine-grained subtype-aware alignment by explicitly enforcing the class-wise separation and subtype-wise compactness with intermediate pseudo labels. Our key insight is that the unlabeled subtypes of a class can be divergent to one another with different conditional and label shifts, while inheriting the local proximity within a subtype. The cases with and without prior information on subtype numbers are investigated to discover the underlying subtype structure in an online fashion. The proposed subtype-aware dynamic UDA achieves promising results on medical diagnosis tasks.
|
1104.4668
|
Massimiliano Vasile
|
Matteo Ceriotti and Massimiliano Vasile
|
MGA trajectory planning with an ACO-inspired algorithm
| null |
Acta Astronautica, 67 (9-10). pp. 1202-1217, ISSN 0094-5765, 2010
| null | null |
cs.CE cs.NE cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a set of celestial bodies, the problem of finding an optimal sequence
of swing-bys, deep space manoeuvres (DSM) and transfer arcs connecting the
elements of the set is combinatorial in nature. The number of possible paths
grows exponentially with the number of celestial bodies. Therefore, the design
of an optimal multiple gravity assist (MGA) trajectory is an NP-hard mixed
combinatorial-continuous problem. Its automated solution would greatly improve
the design of future space missions, allowing the assessment of a large number
of alternative mission options in a short time. This work proposes to formulate
the complete automated design of a multiple gravity assist trajectory as an
autonomous planning and scheduling problem. The resulting scheduled plan will
provide the optimal planetary sequence and a good estimation of the set of
associated optimal trajectories. The trajectory model consists of a sequence of
celestial bodies connected by two-dimensional transfer arcs containing one DSM.
For each transfer arc, the position of the planet and the spacecraft, at the
time of arrival, are matched by varying the pericentre of the preceding
swing-by, or the magnitude of the launch excess velocity, for the first arc.
For each departure date, this model generates a full tree of possible transfers
from the departure to the destination planet. Each leaf of the tree represents
a planetary encounter and a possible way to reach that planet. An algorithm
inspired by Ant Colony Optimization (ACO) is devised to explore the space of
possible plans. The ants explore the tree from departure to destination adding
one node at a time: every time an ant is at a node, a probability function is
used to select a feasible direction. This approach to automatic trajectory
planning is applied to the design of optimal transfers to Saturn and among the
Galilean moons of Jupiter.
|
[
{
"created": "Mon, 25 Apr 2011 00:58:35 GMT",
"version": "v1"
}
] |
2011-04-26
|
[
[
"Ceriotti",
"Matteo",
""
],
[
"Vasile",
"Massimiliano",
""
]
] |
Given a set of celestial bodies, the problem of finding an optimal sequence of swing-bys, deep space manoeuvres (DSM) and transfer arcs connecting the elements of the set is combinatorial in nature. The number of possible paths grows exponentially with the number of celestial bodies. Therefore, the design of an optimal multiple gravity assist (MGA) trajectory is an NP-hard mixed combinatorial-continuous problem. Its automated solution would greatly improve the design of future space missions, allowing the assessment of a large number of alternative mission options in a short time. This work proposes to formulate the complete automated design of a multiple gravity assist trajectory as an autonomous planning and scheduling problem. The resulting scheduled plan will provide the optimal planetary sequence and a good estimation of the set of associated optimal trajectories. The trajectory model consists of a sequence of celestial bodies connected by two-dimensional transfer arcs containing one DSM. For each transfer arc, the position of the planet and the spacecraft, at the time of arrival, are matched by varying the pericentre of the preceding swing-by, or the magnitude of the launch excess velocity, for the first arc. For each departure date, this model generates a full tree of possible transfers from the departure to the destination planet. Each leaf of the tree represents a planetary encounter and a possible way to reach that planet. An algorithm inspired by Ant Colony Optimization (ACO) is devised to explore the space of possible plans. The ants explore the tree from departure to destination adding one node at a time: every time an ant is at a node, a probability function is used to select a feasible direction. This approach to automatic trajectory planning is applied to the design of optimal transfers to Saturn and among the Galilean moons of Jupiter.
|
2310.02264
|
Mingyu Ding
|
Haoyu Zhou, Mingyu Ding, Weikun Peng, Masayoshi Tomizuka, Lin Shao,
Chuang Gan
|
Generalizable Long-Horizon Manipulations with Large Language Models
| null | null | null | null |
cs.RO cs.CL cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work introduces a framework harnessing the capabilities of Large
Language Models (LLMs) to generate primitive task conditions for generalizable
long-horizon manipulations with novel objects and unseen tasks. These task
conditions serve as guides for the generation and adjustment of Dynamic
Movement Primitives (DMP) trajectories for long-horizon task execution. We
further create a challenging robotic manipulation task suite based on Pybullet
for long-horizon task evaluation. Extensive experiments in both simulated and
real-world environments demonstrate the effectiveness of our framework on both
familiar tasks involving new objects and novel but related tasks, highlighting
the potential of LLMs in enhancing robotic system versatility and adaptability.
Project website: https://object814.github.io/Task-Condition-With-LLM/
|
[
{
"created": "Tue, 3 Oct 2023 17:59:46 GMT",
"version": "v1"
}
] |
2023-10-04
|
[
[
"Zhou",
"Haoyu",
""
],
[
"Ding",
"Mingyu",
""
],
[
"Peng",
"Weikun",
""
],
[
"Tomizuka",
"Masayoshi",
""
],
[
"Shao",
"Lin",
""
],
[
"Gan",
"Chuang",
""
]
] |
This work introduces a framework harnessing the capabilities of Large Language Models (LLMs) to generate primitive task conditions for generalizable long-horizon manipulations with novel objects and unseen tasks. These task conditions serve as guides for the generation and adjustment of Dynamic Movement Primitives (DMP) trajectories for long-horizon task execution. We further create a challenging robotic manipulation task suite based on Pybullet for long-horizon task evaluation. Extensive experiments in both simulated and real-world environments demonstrate the effectiveness of our framework on both familiar tasks involving new objects and novel but related tasks, highlighting the potential of LLMs in enhancing robotic system versatility and adaptability. Project website: https://object814.github.io/Task-Condition-With-LLM/
|
1706.06239
|
Hao Wang
|
Hao Wang, Yanmei Fu, Qinyong Wang, Hongzhi Yin, Changying Du, Hui
Xiong
|
A Location-Sentiment-Aware Recommender System for Both Home-Town and
Out-of-Town Users
|
Accepted by KDD 2017
| null | null | null |
cs.SI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatial item recommendation has become an important means to help people
discover interesting locations, especially when people pay a visit to
unfamiliar regions. Some current research focuses on modelling
individual and collective geographical preferences for spatial item
recommendation based on users' check-in records, but they fail to explore the
phenomenon of user interest drift across geographical regions, i.e., users
would show different interests when they travel to different regions. Besides,
they ignore the influence of public comments for subsequent users' check-in
behaviors. Specifically, it is intuitive that users would refuse to check in to
a spatial item whose historical reviews seem negative overall, even though it
might fit their interests. Therefore, it is necessary to recommend the right
item to the right user at the right location. In this paper, we propose a
latent probabilistic generative model called LSARS to mimic the decision-making
process of users' check-in activities both in home-town and out-of-town
scenarios by adapting to user interest drift and crowd sentiments, which can
learn location-aware and sentiment-aware individual interests from the contents
of spatial items and user reviews. Due to the sparsity of user activities in
out-of-town regions, LSARS is further designed to incorporate the public
preferences learned from local users' check-in behaviors. Finally, we deploy
LSARS into two practical application scenarios: spatial item recommendation and
target user discovery. Extensive experiments on two large-scale location-based
social networks (LBSNs) datasets show that LSARS achieves better performance
than existing state-of-the-art methods.
|
[
{
"created": "Tue, 20 Jun 2017 01:54:01 GMT",
"version": "v1"
}
] |
2017-06-21
|
[
[
"Wang",
"Hao",
""
],
[
"Fu",
"Yanmei",
""
],
[
"Wang",
"Qinyong",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Du",
"Changying",
""
],
[
"Xiong",
"Hui",
""
]
] |
Spatial item recommendation has become an important means to help people discover interesting locations, especially when people pay a visit to unfamiliar regions. Some current research focuses on modelling individual and collective geographical preferences for spatial item recommendation based on users' check-in records, but they fail to explore the phenomenon of user interest drift across geographical regions, i.e., users would show different interests when they travel to different regions. Besides, they ignore the influence of public comments for subsequent users' check-in behaviors. Specifically, it is intuitive that users would refuse to check in to a spatial item whose historical reviews seem negative overall, even though it might fit their interests. Therefore, it is necessary to recommend the right item to the right user at the right location. In this paper, we propose a latent probabilistic generative model called LSARS to mimic the decision-making process of users' check-in activities both in home-town and out-of-town scenarios by adapting to user interest drift and crowd sentiments, which can learn location-aware and sentiment-aware individual interests from the contents of spatial items and user reviews. Due to the sparsity of user activities in out-of-town regions, LSARS is further designed to incorporate the public preferences learned from local users' check-in behaviors. Finally, we deploy LSARS into two practical application scenarios: spatial item recommendation and target user discovery. Extensive experiments on two large-scale location-based social networks (LBSNs) datasets show that LSARS achieves better performance than existing state-of-the-art methods.
|
1305.3354
|
Sandip Chakraborty
|
Sandip Chakraborty, Soumyadip Majumder, Diganta Goswami
|
Approximate Congestion Games for Load Balancing in Distributed
Environment
|
A version of this work has been presented at International Workshop
on Distributed System (IWDS) 2010, IIT Kanpur, India, as a "work-in-progress"
report
| null | null | null |
cs.NI cs.DC cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of game theoretic models has been quite successful in describing
various cooperative and non-cooperative optimization problems in networks and
other domains of computer systems. In this paper, we study an application of
game theoretic models in the domain of distributed systems, where nodes play a
game to balance the total processing loads among themselves. We have used the
congestion game model, a model of game theory where many agents compete for
allocating resources, and studied the existence of Nash Equilibrium for such
types of games. As the classical congestion game is known to be PLS-Complete,
we use an approximation, called the \epsilon-Congestion game, which converges
to \epsilon-Nash equilibrium within a finite number of steps under selected
conditions. Our focus is to define the load balancing problem using the model
of \epsilon-congestion games, and finally provide a greedy algorithm for load
balancing in distributed systems. We have simulated our proposed system to show
the effect of the \epsilon-congestion game, and the distribution of load at
equilibrium state.
|
[
{
"created": "Wed, 15 May 2013 05:06:02 GMT",
"version": "v1"
}
] |
2013-05-16
|
[
[
"Chakraborty",
"Sandip",
""
],
[
"Majumder",
"Soumyadip",
""
],
[
"Goswami",
"Diganta",
""
]
] |
The use of game theoretic models has been quite successful in describing various cooperative and non-cooperative optimization problems in networks and other domains of computer systems. In this paper, we study an application of game theoretic models in the domain of distributed systems, where nodes play a game to balance the total processing loads among themselves. We have used the congestion game model, a model of game theory where many agents compete for allocating resources, and studied the existence of Nash Equilibrium for such types of games. As the classical congestion game is known to be PLS-Complete, we use an approximation, called the \epsilon-Congestion game, which converges to \epsilon-Nash equilibrium within a finite number of steps under selected conditions. Our focus is to define the load balancing problem using the model of \epsilon-congestion games, and finally provide a greedy algorithm for load balancing in distributed systems. We have simulated our proposed system to show the effect of the \epsilon-congestion game, and the distribution of load at equilibrium state.
|
2309.13438
|
Tingyu Zhao
|
Tingyu Zhao, Bo Peng, Yuan Sun, Daipeng Yang, Zhenguang Zhang, and Xi
Wu
|
Rethinking Superpixel Segmentation from Biologically Inspired Mechanisms
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, advancements in deep learning-based superpixel segmentation methods
have brought about improvements in both the efficiency and the performance of
segmentation. However, a significant challenge remains in generating
superpixels that strictly adhere to object boundaries while conveying rich
visual significance, especially when cross-surface color correlations may
interfere with objects. Drawing inspiration from neural structure and visual
mechanisms, we propose a biological network architecture comprising an Enhanced
Screening Module (ESM) and a novel Boundary-Aware Label (BAL) for superpixel
segmentation. The ESM enhances semantic information by simulating the
interactive projection mechanisms of the visual cortex. Additionally, the BAL
emulates the spatial frequency characteristics of visual cortical cells to
facilitate the generation of superpixels with strong boundary adherence. We
demonstrate the effectiveness of our approach through evaluations on both the
BSDS500 dataset and the NYUv2 dataset.
|
[
{
"created": "Sat, 23 Sep 2023 17:29:38 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Oct 2023 12:13:53 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Oct 2023 06:43:08 GMT",
"version": "v3"
}
] |
2023-10-12
|
[
[
"Zhao",
"Tingyu",
""
],
[
"Peng",
"Bo",
""
],
[
"Sun",
"Yuan",
""
],
[
"Yang",
"Daipeng",
""
],
[
"Zhang",
"Zhenguang",
""
],
[
"Wu",
"Xi",
""
]
] |
Recently, advancements in deep learning-based superpixel segmentation methods have brought about improvements in both the efficiency and the performance of segmentation. However, a significant challenge remains in generating superpixels that strictly adhere to object boundaries while conveying rich visual significance, especially when cross-surface color correlations may interfere with objects. Drawing inspiration from neural structure and visual mechanisms, we propose a biological network architecture comprising an Enhanced Screening Module (ESM) and a novel Boundary-Aware Label (BAL) for superpixel segmentation. The ESM enhances semantic information by simulating the interactive projection mechanisms of the visual cortex. Additionally, the BAL emulates the spatial frequency characteristics of visual cortical cells to facilitate the generation of superpixels with strong boundary adherence. We demonstrate the effectiveness of our approach through evaluations on both the BSDS500 dataset and the NYUv2 dataset.
|
2211.17059
|
Yiyang Liu
|
Yiyang Liu, Chenxin Li, Xiaotong Tu, Xinghao Ding, Yue Huang
|
Hint-dynamic Knowledge Distillation
|
5 pages
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge Distillation (KD) transfers the knowledge from a high-capacity
teacher model to promote a smaller student model. Existing efforts guide the
distillation by matching their prediction logits, feature embedding, etc.,
while leaving how to efficiently utilize them in conjunction less explored. In
this paper, we propose Hint-dynamic Knowledge Distillation, dubbed HKD, which
excavates the knowledge from the teacher's hints in a dynamic scheme. The
guidance effect from the knowledge hints usually varies in different instances
and learning stages, which motivates us to customize a specific hint-learning
manner for each instance adaptively. Specifically, a meta-weight network is
introduced to generate the instance-wise weight coefficients about knowledge
hints in the perception of the dynamical learning progress of the student
model. We further present a weight ensembling strategy to eliminate the
potential bias of coefficient estimation by exploiting the historical statistics.
Experiments on standard benchmarks of CIFAR-100 and Tiny-ImageNet manifest that
the proposed HKD boosts the effect of knowledge distillation tasks.
|
[
{
"created": "Wed, 30 Nov 2022 15:03:53 GMT",
"version": "v1"
}
] |
2022-12-01
|
[
[
"Liu",
"Yiyang",
""
],
[
"Li",
"Chenxin",
""
],
[
"Tu",
"Xiaotong",
""
],
[
"Ding",
"Xinghao",
""
],
[
"Huang",
"Yue",
""
]
] |
Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher model to promote a smaller student model. Existing efforts guide the distillation by matching their prediction logits, feature embedding, etc., while leaving how to efficiently utilize them in conjunction less explored. In this paper, we propose Hint-dynamic Knowledge Distillation, dubbed HKD, which excavates the knowledge from the teacher's hints in a dynamic scheme. The guidance effect from the knowledge hints usually varies in different instances and learning stages, which motivates us to customize a specific hint-learning manner for each instance adaptively. Specifically, a meta-weight network is introduced to generate the instance-wise weight coefficients about knowledge hints in the perception of the dynamical learning progress of the student model. We further present a weight ensembling strategy to eliminate the potential bias of coefficient estimation by exploiting the historical statistics. Experiments on standard benchmarks of CIFAR-100 and Tiny-ImageNet manifest that the proposed HKD boosts the effect of knowledge distillation tasks.
|
1301.4478
|
Chaitanya Swamy
|
Sara Ahmadian, Zachary Friggstad, and Chaitanya Swamy
|
Local-Search based Approximation Algorithms for Mobile Facility Location
Problems
| null | null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the {\em mobile facility location} (\mfl) problem. We are given a
set of facilities and clients located in a common metric space. The goal is to
move each facility from its initial location to a destination and assign each
client to the destination of some facility so as to minimize the sum of the
movement-costs of the facilities and the client-assignment costs. This
abstracts facility-location settings where one has the flexibility of moving
facilities from their current locations to other destinations so as to serve
clients more efficiently by reducing their assignment costs.
We give the first {\em local-search based} approximation algorithm for this
problem and achieve the best-known approximation guarantee. Our main result is
a $(3+\epsilon)$-approximation for this problem for any constant $\epsilon>0$
using local search. The previous best guarantee was an 8-approximation
algorithm based on LP-rounding. Our guarantee {\em matches} the best-known
approximation guarantee for the $k$-median problem. Since there is an
approximation-preserving reduction from the $k$-median problem to \mfl, any
improvement of our result would imply an analogous improvement for the
$k$-median problem. Furthermore, {\em our analysis is tight} (up to $o(1)$
factors) since the tight example for the local-search based 3-approximation
algorithm for $k$-median can be easily adapted to show that our local-search
algorithm has a tight approximation ratio of 3. One of the chief novelties of
the analysis is that in order to generate a suitable collection of local-search
moves whose resulting inequalities yield the desired bound on the cost of a
local-optimum, we define a tree-like structure that (loosely speaking)
functions as a "recursion tree", using which we spawn off local-search moves by
exploring this tree to a constant depth.
|
[
{
"created": "Fri, 18 Jan 2013 20:05:12 GMT",
"version": "v1"
}
] |
2013-01-21
|
[
[
"Ahmadian",
"Sara",
""
],
[
"Friggstad",
"Zachary",
""
],
[
"Swamy",
"Chaitanya",
""
]
] |
We consider the {\em mobile facility location} (\mfl) problem. We are given a set of facilities and clients located in a common metric space. The goal is to move each facility from its initial location to a destination and assign each client to the destination of some facility so as to minimize the sum of the movement-costs of the facilities and the client-assignment costs. This abstracts facility-location settings where one has the flexibility of moving facilities from their current locations to other destinations so as to serve clients more efficiently by reducing their assignment costs. We give the first {\em local-search based} approximation algorithm for this problem and achieve the best-known approximation guarantee. Our main result is a $(3+\epsilon)$-approximation for this problem for any constant $\epsilon>0$ using local search. The previous best guarantee was an 8-approximation algorithm based on LP-rounding. Our guarantee {\em matches} the best-known approximation guarantee for the $k$-median problem. Since there is an approximation-preserving reduction from the $k$-median problem to \mfl, any improvement of our result would imply an analogous improvement for the $k$-median problem. Furthermore, {\em our analysis is tight} (up to $o(1)$ factors) since the tight example for the local-search based 3-approximation algorithm for $k$-median can be easily adapted to show that our local-search algorithm has a tight approximation ratio of 3. One of the chief novelties of the analysis is that in order to generate a suitable collection of local-search moves whose resulting inequalities yield the desired bound on the cost of a local-optimum, we define a tree-like structure that (loosely speaking) functions as a "recursion tree", using which we spawn off local-search moves by exploring this tree to a constant depth.
|
2009.04656
|
Yian Li
|
Yian Li, Hai Zhao
|
Learning Universal Representations from Word to Sentence
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite well-developed, cutting-edge representation learning for language,
most language representation models usually focus on a specific level of
linguistic unit, which causes great inconvenience when handling multiple
layers of linguistic objects in a unified way. Thus this work
introduces and explores the universal representation learning, i.e., embeddings
of different levels of linguistic unit in a uniform vector space through a
task-independent evaluation. We present our approach of constructing analogy
datasets in terms of words, phrases and sentences and experiment with multiple
representation models to examine geometric properties of the learned vector
space. Then we empirically verify that well pre-trained Transformer models
incorporated with appropriate training settings may effectively yield universal
representation. Especially, our implementation of fine-tuning ALBERT on NLI and
PPDB datasets achieves the highest accuracy on analogy tasks in different
language levels. Further experiments on the insurance FAQ task show
effectiveness of universal representation models in real-world applications.
|
[
{
"created": "Thu, 10 Sep 2020 03:53:18 GMT",
"version": "v1"
}
] |
2020-09-11
|
[
[
"Li",
"Yian",
""
],
[
"Zhao",
"Hai",
""
]
] |
Despite well-developed, cutting-edge representation learning for language, most language representation models usually focus on a specific level of linguistic unit, which causes great inconvenience when handling multiple layers of linguistic objects in a unified way. Thus this work introduces and explores the universal representation learning, i.e., embeddings of different levels of linguistic unit in a uniform vector space through a task-independent evaluation. We present our approach of constructing analogy datasets in terms of words, phrases and sentences and experiment with multiple representation models to examine geometric properties of the learned vector space. Then we empirically verify that well pre-trained Transformer models incorporated with appropriate training settings may effectively yield universal representation. Especially, our implementation of fine-tuning ALBERT on NLI and PPDB datasets achieves the highest accuracy on analogy tasks in different language levels. Further experiments on the insurance FAQ task show effectiveness of universal representation models in real-world applications.
|
2303.07831
|
Yu Zhou
|
Yu Zhou, Liyuan Guo, Lianghai Jin
|
Quaternion Orthogonal Transformer for Facial Expression Recognition in
the Wild
|
This paper has been accepted to ICASSP2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial expression recognition (FER) is a challenging topic in artificial
intelligence. Recently, many researchers have attempted to introduce Vision
Transformer (ViT) to the FER task. However, ViT cannot fully utilize emotional
features extracted from raw images and requires a lot of computing resources.
To overcome these problems, we propose a quaternion orthogonal transformer
(QOT) for FER. Firstly, to reduce redundancy among features extracted from
pre-trained ResNet-50, we use the orthogonal loss to decompose and compact
these features into three sets of orthogonal sub-features. Secondly, three
orthogonal sub-features are integrated into a quaternion matrix, which
maintains the correlations between different orthogonal components. Finally, we
develop a quaternion vision transformer (Q-ViT) for feature classification. The
Q-ViT adopts quaternion operations instead of the original operations in ViT,
which improves the final accuracies with fewer parameters. Experimental results
on three in-the-wild FER datasets show that the proposed QOT outperforms
several state-of-the-art models and reduces the computations.
|
[
{
"created": "Tue, 14 Mar 2023 12:07:48 GMT",
"version": "v1"
}
] |
2023-03-15
|
[
[
"Zhou",
"Yu",
""
],
[
"Guo",
"Liyuan",
""
],
[
"Jin",
"Lianghai",
""
]
] |
Facial expression recognition (FER) is a challenging topic in artificial intelligence. Recently, many researchers have attempted to introduce Vision Transformer (ViT) to the FER task. However, ViT cannot fully utilize emotional features extracted from raw images and requires a lot of computing resources. To overcome these problems, we propose a quaternion orthogonal transformer (QOT) for FER. Firstly, to reduce redundancy among features extracted from pre-trained ResNet-50, we use the orthogonal loss to decompose and compact these features into three sets of orthogonal sub-features. Secondly, three orthogonal sub-features are integrated into a quaternion matrix, which maintains the correlations between different orthogonal components. Finally, we develop a quaternion vision transformer (Q-ViT) for feature classification. The Q-ViT adopts quaternion operations instead of the original operations in ViT, which improves the final accuracies with fewer parameters. Experimental results on three in-the-wild FER datasets show that the proposed QOT outperforms several state-of-the-art models and reduces the computations.
|
2001.08328
|
Yuwei Tu
|
Yuwei Tu, Weiyu Chen, Christopher G. Brinton
|
A Deep Learning Approach to Behavior-Based Learner Modeling
| null | null | null | null |
cs.LG cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing popularity of e-learning has created demand for improving
online education through techniques such as predictive analytics and content
recommendations. In this paper, we study learner outcome predictions, i.e.,
predictions of how they will perform at the end of a course. We propose a novel
Two Branch Decision Network for performance prediction that incorporates two
important factors: how learners progress through the course and how the content
progresses through the course. We combine clickstream features which log every
action the learner takes while learning, and textual features which are
generated through pre-trained GloVe word embeddings. To assess the performance
of our proposed network, we collect data from a short online course designed
for corporate training and evaluate both neural network and non-neural network
based algorithms on it. Our proposed algorithm achieves 95.7% accuracy and
0.958 AUC score, which outperforms all other models. The results also indicate
the combination of behavior features and text features is more predictive than
behavior features alone and that neural network models are powerful in capturing the
joint relationship between user behavior and course content.
|
[
{
"created": "Thu, 23 Jan 2020 01:26:52 GMT",
"version": "v1"
}
] |
2020-01-24
|
[
[
"Tu",
"Yuwei",
""
],
[
"Chen",
"Weiyu",
""
],
[
"Brinton",
"Christopher G.",
""
]
] |
The increasing popularity of e-learning has created demand for improving online education through techniques such as predictive analytics and content recommendations. In this paper, we study learner outcome predictions, i.e., predictions of how they will perform at the end of a course. We propose a novel Two Branch Decision Network for performance prediction that incorporates two important factors: how learners progress through the course and how the content progresses through the course. We combine clickstream features which log every action the learner takes while learning, and textual features which are generated through pre-trained GloVe word embeddings. To assess the performance of our proposed network, we collect data from a short online course designed for corporate training and evaluate both neural network and non-neural network based algorithms on it. Our proposed algorithm achieves 95.7% accuracy and 0.958 AUC score, which outperforms all other models. The results also indicate the combination of behavior features and text features is more predictive than behavior features alone and that neural network models are powerful in capturing the joint relationship between user behavior and course content.
|
1809.07912
|
Meng Shen
|
Meng Shen, Baoli Ma, Liehuang Zhu, Rashid Mijumbi, Xiaojiang Du, and
Jiankun Hu
|
Cloud-Based Approximate Constrained Shortest Distance Queries Over
Encrypted Graphs With Privacy Protection
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constrained shortest distance (CSD) querying is one of the fundamental graph
query primitives, which finds the shortest distance from an origin to a
destination in a graph with a constraint that the total cost does not exceed a
given threshold. CSD querying has a wide range of applications, such as routing
in telecommunications and transportation. With an increasing prevalence of
cloud computing paradigm, graph owners desire to outsource their graphs to
cloud servers. In order to protect sensitive information, these graphs are
usually encrypted before being outsourced to the cloud. This, however, imposes
a great challenge to CSD querying over encrypted graphs. Since performing
constraint filtering is an intractable task, existing work mainly focuses on
unconstrained shortest distance queries. CSD querying over encrypted graphs
remains an open research problem. In this paper, we propose Connor, a novel
graph encryption scheme that enables approximate CSD querying. Connor is built
based on an efficient, tree-based ciphertext comparison protocol, and makes use
of symmetric-key primitives and the somewhat homomorphic encryption, making it
computationally efficient. Using Connor, a graph owner can first encrypt
privacy-sensitive graphs and then outsource them to the cloud server, achieving
the necessary privacy without losing the ability of querying. Extensive
experiments with real-world datasets demonstrate the effectiveness and
efficiency of the proposed graph encryption scheme.
|
[
{
"created": "Fri, 21 Sep 2018 01:58:00 GMT",
"version": "v1"
}
] |
2018-09-24
|
[
[
"Shen",
"Meng",
""
],
[
"Ma",
"Baoli",
""
],
[
"Zhu",
"Liehuang",
""
],
[
"Mijumbi",
"Rashid",
""
],
[
"Du",
"Xiaojiang",
""
],
[
"Hu",
"Jiankun",
""
]
] |
Constrained shortest distance (CSD) querying is one of the fundamental graph query primitives, which finds the shortest distance from an origin to a destination in a graph with a constraint that the total cost does not exceed a given threshold. CSD querying has a wide range of applications, such as routing in telecommunications and transportation. With an increasing prevalence of cloud computing paradigm, graph owners desire to outsource their graphs to cloud servers. In order to protect sensitive information, these graphs are usually encrypted before being outsourced to the cloud. This, however, imposes a great challenge to CSD querying over encrypted graphs. Since performing constraint filtering is an intractable task, existing work mainly focuses on unconstrained shortest distance queries. CSD querying over encrypted graphs remains an open research problem. In this paper, we propose Connor, a novel graph encryption scheme that enables approximate CSD querying. Connor is built based on an efficient, tree-based ciphertext comparison protocol, and makes use of symmetric-key primitives and the somewhat homomorphic encryption, making it computationally efficient. Using Connor, a graph owner can first encrypt privacy-sensitive graphs and then outsource them to the cloud server, achieving the necessary privacy without losing the ability of querying. Extensive experiments with real-world datasets demonstrate the effectiveness and efficiency of the proposed graph encryption scheme.
|
2403.07238
|
Mostafa Jamshidian
|
Farah Alkhatib, Mostafa Jamshidian, Donatien Le Liepvre, Florian
Bernard, Ludovic Minvielle, Adam Wittek, Karol Miller
|
Towards Full Automation of Geometry Extraction for Biomechanical
Analysis of Abdominal Aortic Aneurysm; Neural Network-Based versus Classical
Methodologies
|
32 pages, 9 figures
| null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study we investigated the impact of image segmentation methods on the
results of stress computation in the wall of abdominal aortic aneurysms (AAAs).
We compared wall stress distributions and magnitudes calculated from geometry
models obtained from classical semi-automated segmentation versus automated
neural network-based segmentation. Ten different AAA contrast-enhanced computed
tomography (CT) images were semi-automatically segmented by an analyst, taking,
depending on the quality of an image, between 15 and 40 minutes of human effort
per patient. The same images were automatically segmented using PRAEVAorta 2,
commercial software by NUREA (https://www.nurea-soft.com/), developed based on
artificial intelligence (AI) algorithms, requiring only 1-2 minutes of computer
time per patient. Aneurysm wall stress calculations performed using the BioPARR
software (https://bioparr.mech.uwa.edu.au/) revealed that, compared to the
classical semi-automated segmentation, the automatic neural network-based
segmentation leads to equivalent stress distributions, and slightly higher peak
and 99th percentile maximum principal stress values. This difference is due to
consistently larger lumen surface areas in automatically segmented models as
compared to classical semi-automated segmentations, resulting in greater total
pressure load on the wall. Our findings are a steppingstone toward a fully
automated pipeline for biomechanical analysis of AAAs, starting with CT scans
and concluding with wall stress assessment, while at the same time highlighting
the critical importance of the repeatable and accurate segmentation of the
lumen, the difficult problem often underestimated by the literature.
|
[
{
"created": "Tue, 12 Mar 2024 01:20:34 GMT",
"version": "v1"
}
] |
2024-03-13
|
[
[
"Alkhatib",
"Farah",
""
],
[
"Jamshidian",
"Mostafa",
""
],
[
"Liepvre",
"Donatien Le",
""
],
[
"Bernard",
"Florian",
""
],
[
"Minvielle",
"Ludovic",
""
],
[
"Wittek",
"Adam",
""
],
[
"Miller",
"Karol",
""
]
] |
In this study we investigated the impact of image segmentation methods on the results of stress computation in the wall of abdominal aortic aneurysms (AAAs). We compared wall stress distributions and magnitudes calculated from geometry models obtained from classical semi-automated segmentation versus automated neural network-based segmentation. Ten different AAA contrast-enhanced computed tomography (CT) images were semi-automatically segmented by an analyst, taking, depending on the quality of an image, between 15 and 40 minutes of human effort per patient. The same images were automatically segmented using PRAEVAorta 2, commercial software by NUREA (https://www.nurea-soft.com/), developed based on artificial intelligence (AI) algorithms, requiring only 1-2 minutes of computer time per patient. Aneurysm wall stress calculations performed using the BioPARR software (https://bioparr.mech.uwa.edu.au/) revealed that, compared to the classical semi-automated segmentation, the automatic neural network-based segmentation leads to equivalent stress distributions, and slightly higher peak and 99th percentile maximum principal stress values. This difference is due to consistently larger lumen surface areas in automatically segmented models as compared to classical semi-automated segmentations, resulting in greater total pressure load on the wall. Our findings are a steppingstone toward a fully automated pipeline for biomechanical analysis of AAAs, starting with CT scans and concluding with wall stress assessment, while at the same time highlighting the critical importance of the repeatable and accurate segmentation of the lumen, the difficult problem often underestimated by the literature.
|
2406.17873
|
Zhongtao Miao
|
Zhongtao Miao, Kaiyan Zhao, Yoshimasa Tsuruoka
|
Improving Arithmetic Reasoning Ability of Large Language Models through
Relation Tuples, Verification and Dynamic Feedback
|
Under review, 25 figures, 8 tables, 29 pages
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current representations used in reasoning steps of large language models can
mostly be categorized into two main types: (1) natural language, which is
difficult to verify; and (2) non-natural language, usually programming code,
which is difficult for people who are unfamiliar with coding to read. In this
paper, we propose to use a semi-structured form to represent reasoning steps of
large language models. Specifically, we use relation tuples, which are not only
human-readable but also machine-friendly and easier to verify than natural
language. We implement a framework that includes three main components: (1)
introducing relation tuples into the reasoning steps of large language models;
(2) implementing an automatic verification process of reasoning steps with a
local code interpreter based on relation tuples; and (3) integrating a simple
and effective dynamic feedback mechanism, which we found helpful for
self-improvement of large language models. The experimental results on various
arithmetic datasets demonstrate the effectiveness of our method in improving
the arithmetic reasoning ability of large language models. The source code is
available at https://github.com/gpgg/art.
|
[
{
"created": "Tue, 25 Jun 2024 18:21:00 GMT",
"version": "v1"
}
] |
2024-06-27
|
[
[
"Miao",
"Zhongtao",
""
],
[
"Zhao",
"Kaiyan",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] |
Current representations used in reasoning steps of large language models can mostly be categorized into two main types: (1) natural language, which is difficult to verify; and (2) non-natural language, usually programming code, which is difficult for people who are unfamiliar with coding to read. In this paper, we propose to use a semi-structured form to represent reasoning steps of large language models. Specifically, we use relation tuples, which are not only human-readable but also machine-friendly and easier to verify than natural language. We implement a framework that includes three main components: (1) introducing relation tuples into the reasoning steps of large language models; (2) implementing an automatic verification process of reasoning steps with a local code interpreter based on relation tuples; and (3) integrating a simple and effective dynamic feedback mechanism, which we found helpful for self-improvement of large language models. The experimental results on various arithmetic datasets demonstrate the effectiveness of our method in improving the arithmetic reasoning ability of large language models. The source code is available at https://github.com/gpgg/art.
|
2306.17411
|
Yanjiang Guo
|
Yanjiang Guo, Zheyuan Jiang, Yen-Jen Wang, Jingyue Gao, Jianyu Chen
|
Decentralized Motor Skill Learning for Complex Robotic Systems
|
8 pages, 7 figures
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) has achieved remarkable success in complex
robotic systems (e.g., quadruped locomotion). In previous works, the RL-based
controller was typically implemented as a single neural network with
concatenated observation input. However, the corresponding learned policy is
highly task-specific. Since all motors are controlled in a centralized way,
out-of-distribution local observations can impact global motors through the
single coupled neural network policy. In contrast, animals and humans can
control their limbs separately. Inspired by this biological phenomenon, we
propose a Decentralized motor skill (DEMOS) learning algorithm to automatically
discover motor groups that can be decoupled from each other while preserving
essential connections and then learn a decentralized motor control policy. Our
method improves the robustness and generalization of the policy without
sacrificing performance. Experiments on quadruped and humanoid robots
demonstrate that the learned policy is robust against local motor malfunctions
and can be transferred to new tasks.
|
[
{
"created": "Fri, 30 Jun 2023 05:55:34 GMT",
"version": "v1"
}
] |
2023-07-03
|
[
[
"Guo",
"Yanjiang",
""
],
[
"Jiang",
"Zheyuan",
""
],
[
"Wang",
"Yen-Jen",
""
],
[
"Gao",
"Jingyue",
""
],
[
"Chen",
"Jianyu",
""
]
] |
Reinforcement learning (RL) has achieved remarkable success in complex robotic systems (e.g., quadruped locomotion). In previous works, the RL-based controller was typically implemented as a single neural network with concatenated observation input. However, the corresponding learned policy is highly task-specific. Since all motors are controlled in a centralized way, out-of-distribution local observations can impact global motors through the single coupled neural network policy. In contrast, animals and humans can control their limbs separately. Inspired by this biological phenomenon, we propose a Decentralized motor skill (DEMOS) learning algorithm to automatically discover motor groups that can be decoupled from each other while preserving essential connections and then learn a decentralized motor control policy. Our method improves the robustness and generalization of the policy without sacrificing performance. Experiments on quadruped and humanoid robots demonstrate that the learned policy is robust against local motor malfunctions and can be transferred to new tasks.
|
2210.02287
|
Luyuan Xie
|
Luyuan Xie, Yan Zhong, Lin Yang, Zhaoyu Yan, Zhonghai Wu, Junjie Wang
|
TC-SKNet with GridMask for Low-complexity Classification of Acoustic
scene
|
Accepted to APSIPA ASC 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional neural networks (CNNs) have good performance in low-complexity
classification tasks such as acoustic scene classifications (ASCs). However,
there are few studies on the relationship between the length of target speech
and the size of the convolution kernels. In this paper, we combine Selective
Kernel Network with Temporal-Convolution (TC-SKNet) to adjust the receptive
field of convolution kernels to solve the problem of variable length of target
voice while keeping low-complexity. GridMask is a data augmentation strategy
that masks part of the raw data or feature area. It can enhance the
generalization of the model, playing a similar role to dropout. In our
experiments, the performance gain brought by GridMask is stronger than spectrum
augmentation in ASCs. Finally, we adopt AutoML to search for the best structure
of TC-SKNet and the hyperparameters of GridMask to improve the classification
performance. As a result, TC-SKNet reaches a peak accuracy of 59.87%,
equivalent to that of the SOTA, while using only 20.9 K parameters.
|
[
{
"created": "Wed, 5 Oct 2022 14:24:17 GMT",
"version": "v1"
}
] |
2022-10-06
|
[
[
"Xie",
"Luyuan",
""
],
[
"Zhong",
"Yan",
""
],
[
"Yang",
"Lin",
""
],
[
"Yan",
"Zhaoyu",
""
],
[
"Wu",
"Zhonghai",
""
],
[
"Wang",
"Junjie",
""
]
] |
Convolutional neural networks (CNNs) have good performance in low-complexity classification tasks such as acoustic scene classifications (ASCs). However, there are few studies on the relationship between the length of target speech and the size of the convolution kernels. In this paper, we combine Selective Kernel Network with Temporal-Convolution (TC-SKNet) to adjust the receptive field of convolution kernels to solve the problem of variable length of target voice while keeping low-complexity. GridMask is a data augmentation strategy that masks part of the raw data or feature area. It can enhance the generalization of the model, playing a similar role to dropout. In our experiments, the performance gain brought by GridMask is stronger than spectrum augmentation in ASCs. Finally, we adopt AutoML to search for the best structure of TC-SKNet and the hyperparameters of GridMask to improve the classification performance. As a result, TC-SKNet reaches a peak accuracy of 59.87%, equivalent to that of the SOTA, while using only 20.9 K parameters.
|
cs/0208044
|
Stephen A. Fenner
|
Stephen A. Fenner
|
Gales and supergales are equivalent for defining constructive Hausdorff
dimension
|
7 pages, no figures
| null | null | null |
cs.CC
| null |
We show that for a wide range of probability measures, constructive gales are
interchangeable with constructive supergales for defining constructive Hausdorff
dimension, thus generalizing a previous independent result of Hitchcock
(cs.CC/0208043) and partially answering an open question of Lutz
(cs.CC/0203017).
|
[
{
"created": "Thu, 29 Aug 2002 21:25:47 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Fenner",
"Stephen A.",
""
]
] |
We show that for a wide range of probability measures, constructive gales are interchangeable with constructive supergales for defining constructive Hausdorff dimension, thus generalizing a previous independent result of Hitchcock (cs.CC/0208043) and partially answering an open question of Lutz (cs.CC/0203017).
|
1803.06904
|
SeyedMajid Azimi
|
Seyed Majid Azimi, Peter Fischer, Marco K\"orner, Peter Reinartz
|
Aerial LaneNet: Lane Marking Semantic Segmentation in Aerial Imagery
using Wavelet-Enhanced Cost-sensitive Symmetric Fully Convolutional Neural
Networks
|
IEEE TGRS 2018 - Accepted
| null |
10.1109/TGRS.2018.2878510
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The knowledge about the placement and appearance of lane markings is a
prerequisite for the creation of maps with high precision, necessary for
autonomous driving, infrastructure monitoring, lane-wise traffic management,
and urban planning. Lane markings are one of the important components of such
maps. Lane markings convey the rules of roads to drivers. While these rules are
learned by humans, an autonomous driving vehicle should be taught to learn them
to localize itself. Therefore, accurate and reliable lane marking semantic
segmentation in the imagery of roads and highways is needed to achieve such
goals. We use airborne imagery which can capture a large area in a short period
of time by introducing an aerial lane marking dataset. In this work, we propose
a Symmetric Fully Convolutional Neural Network enhanced by Wavelet Transform in
order to automatically carry out lane marking segmentation in aerial imagery.
Due to a heavily unbalanced problem in terms of number of lane marking pixels
compared with background pixels, we use a customized loss function as well as a
new type of data augmentation step. We achieve a very high accuracy in
pixel-wise localization of lane markings without using 3rd-party information.
In this work, we introduce the first high-quality dataset used within our
experiments which contains a broad range of situations and classes of lane
markings representative of current transportation systems. This dataset will be
publicly available and hence, it can be used as the benchmark dataset for
future algorithms within this domain.
|
[
{
"created": "Mon, 19 Mar 2018 13:32:27 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Nov 2018 16:16:51 GMT",
"version": "v2"
}
] |
2019-08-26
|
[
[
"Azimi",
"Seyed Majid",
""
],
[
"Fischer",
"Peter",
""
],
[
"Körner",
"Marco",
""
],
[
"Reinartz",
"Peter",
""
]
] |
The knowledge about the placement and appearance of lane markings is a prerequisite for the creation of maps with high precision, necessary for autonomous driving, infrastructure monitoring, lane-wise traffic management, and urban planning. Lane markings are one of the important components of such maps. Lane markings convey the rules of roads to drivers. While these rules are learned by humans, an autonomous driving vehicle should be taught to learn them to localize itself. Therefore, accurate and reliable lane marking semantic segmentation in the imagery of roads and highways is needed to achieve such goals. We use airborne imagery which can capture a large area in a short period of time by introducing an aerial lane marking dataset. In this work, we propose a Symmetric Fully Convolutional Neural Network enhanced by Wavelet Transform in order to automatically carry out lane marking segmentation in aerial imagery. Due to a heavily unbalanced problem in terms of number of lane marking pixels compared with background pixels, we use a customized loss function as well as a new type of data augmentation step. We achieve a very high accuracy in pixel-wise localization of lane markings without using 3rd-party information. In this work, we introduce the first high-quality dataset used within our experiments which contains a broad range of situations and classes of lane markings representative of current transportation systems. This dataset will be publicly available and hence, it can be used as the benchmark dataset for future algorithms within this domain.
|
2405.00287
|
Jeongwhan Choi
|
Chaejeong Lee, Jeongwhan Choi, Hyowon Wi, Sung-Bae Cho, Noseong Park
|
Stochastic Sampling for Contrastive Views and Hard Negative Samples in
Graph-based Collaborative Filtering
| null | null | null | null |
cs.IR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph-based collaborative filtering (CF) has emerged as a promising approach
in recommendation systems. Despite its achievements, graph-based CF models face
challenges due to data sparsity and negative sampling. In this paper, we
propose a novel Stochastic sampling for i) COntrastive views and ii) hard
NEgative samples (SCONE) to overcome these issues. By considering that they are
both sampling tasks, we generate dynamic augmented views and diverse hard
negative samples via our unified stochastic sampling framework based on
score-based generative models. In our comprehensive evaluations with 6
benchmark datasets, our proposed SCONE significantly improves recommendation
accuracy and robustness, and demonstrates the superiority of our approach over
existing CF models. Furthermore, we prove the efficacy of user-item specific
stochastic sampling for addressing the user sparsity and item popularity
issues. The integration of stochastic sampling and graph-based CF achieves
state-of-the-art performance in personalized recommendation systems, making significant
strides in information-rich environments.
|
[
{
"created": "Wed, 1 May 2024 02:27:59 GMT",
"version": "v1"
}
] |
2024-05-02
|
[
[
"Lee",
"Chaejeong",
""
],
[
"Choi",
"Jeongwhan",
""
],
[
"Wi",
"Hyowon",
""
],
[
"Cho",
"Sung-Bae",
""
],
[
"Park",
"Noseong",
""
]
] |
Graph-based collaborative filtering (CF) has emerged as a promising approach in recommendation systems. Despite its achievements, graph-based CF models face challenges due to data sparsity and negative sampling. In this paper, we propose a novel Stochastic sampling for i) COntrastive views and ii) hard NEgative samples (SCONE) to overcome these issues. By considering that they are both sampling tasks, we generate dynamic augmented views and diverse hard negative samples via our unified stochastic sampling framework based on score-based generative models. In our comprehensive evaluations with 6 benchmark datasets, our proposed SCONE significantly improves recommendation accuracy and robustness, and demonstrates the superiority of our approach over existing CF models. Furthermore, we prove the efficacy of user-item specific stochastic sampling for addressing the user sparsity and item popularity issues. The integration of stochastic sampling and graph-based CF achieves state-of-the-art performance in personalized recommendation systems, making significant strides in information-rich environments.
|
1811.00677
|
Rafael Menelau Oliveira E Cruz
|
Rafael M. O. Cruz, Robert Sabourin, George D. C. Cavalcanti
|
Analyzing different prototype selection techniques for dynamic
classifier and ensemble selection
| null |
Published on the International Joint Conference on Neural
Networks, 2017, 3959-3966
|
10.1109/IJCNN.2017.7966355
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In dynamic selection (DS) techniques, only the most competent classifiers
for the classification of a specific test sample are selected to predict the
sample's class labels. The most important step in DS techniques is estimating
the competence of the base classifiers for the classification of each specific
test sample. The classifiers' competence is usually estimated using the
neighborhood of the test sample defined on the validation samples, called the
region of competence. Thus, the performance of DS techniques is sensitive to
the distribution of the validation set. In this paper, we evaluate six
prototype selection techniques that work by editing the validation data in
order to remove noise and redundant instances. Experiments conducted using
several state-of-the-art DS techniques over 30 classification problems
demonstrate that by using prototype selection techniques we can improve the
classification accuracy of DS techniques and also significantly reduce the
computational cost involved.
|
[
{
"created": "Thu, 1 Nov 2018 23:34:10 GMT",
"version": "v1"
}
] |
2018-11-05
|
[
[
"Cruz",
"Rafael M. O.",
""
],
[
"Sabourin",
"Robert",
""
],
[
"Cavalcanti",
"George D. C.",
""
]
] |
In dynamic selection (DS) techniques, only the most competent classifiers for the classification of a specific test sample are selected to predict the sample's class labels. The most important step in DS techniques is estimating the competence of the base classifiers for the classification of each specific test sample. The classifiers' competence is usually estimated using the neighborhood of the test sample defined on the validation samples, called the region of competence. Thus, the performance of DS techniques is sensitive to the distribution of the validation set. In this paper, we evaluate six prototype selection techniques that work by editing the validation data in order to remove noise and redundant instances. Experiments conducted using several state-of-the-art DS techniques over 30 classification problems demonstrate that by using prototype selection techniques we can improve the classification accuracy of DS techniques and also significantly reduce the computational cost involved.
|
2009.05487
|
Timo Freiesleben
|
Timo Freiesleben
|
The Intriguing Relation Between Counterfactual Explanations and
Adversarial Examples
| null | null |
10.1007/s11023-021-09580-9
| null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The same method that creates adversarial examples (AEs) to fool
image-classifiers can be used to generate counterfactual explanations (CEs)
that explain algorithmic decisions. This observation has led researchers to
consider CEs as AEs by another name. We argue that the relationship to the true
label and the tolerance with respect to proximity are two properties that
formally distinguish CEs and AEs. Based on these arguments, we introduce CEs,
AEs, and related concepts mathematically in a common framework. Furthermore, we
show connections between current methods for generating CEs and AEs, and
estimate that the fields will merge more and more as the number of common
use-cases grows.
|
[
{
"created": "Fri, 11 Sep 2020 15:09:12 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Dec 2020 10:05:17 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Aug 2021 08:40:29 GMT",
"version": "v3"
}
] |
2021-11-03
|
[
[
"Freiesleben",
"Timo",
""
]
] |
The same method that creates adversarial examples (AEs) to fool image-classifiers can be used to generate counterfactual explanations (CEs) that explain algorithmic decisions. This observation has led researchers to consider CEs as AEs by another name. We argue that the relationship to the true label and the tolerance with respect to proximity are two properties that formally distinguish CEs and AEs. Based on these arguments, we introduce CEs, AEs, and related concepts mathematically in a common framework. Furthermore, we show connections between current methods for generating CEs and AEs, and estimate that the fields will merge more and more as the number of common use-cases grows.
|
2406.16527
|
Miguel Arana-Catania
|
Zheng Fang, Miguel Arana-Catania, Felix-Anselm van Lier, Juliana Outes
Velarde, Harry Bregazzi, Mara Airoldi, Eleanor Carter, Rob Procter
|
SyROCCo: Enhancing Systematic Reviews using Machine Learning
|
28 pages, 5 figures. To appear in Data & Policy journal
| null | null | null |
cs.CL cs.CY cs.DL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The sheer number of research outputs published every year makes systematic
reviewing increasingly time- and resource-intensive. This paper explores the
use of machine learning techniques to help navigate the systematic review
process. ML has previously been used to reliably 'screen' articles for review -
that is, identify relevant articles based on reviewers' inclusion criteria. The
application of ML techniques to subsequent stages of a review, however, such as
data extraction and evidence mapping, is in its infancy. We therefore set out
to develop a series of tools that would assist in the profiling and analysis of
1,952 publications on the theme of 'outcomes-based contracting'. Tools were
developed for the following tasks: assign publications into 'policy area'
categories; identify and extract key information for evidence mapping, such as
organisations, laws, and geographical information; connect the evidence base to
an existing dataset on the same topic; and identify subgroups of articles that
may share thematic content. An interactive tool using these techniques and a
public dataset with their outputs have been released. Our results demonstrate
the utility of ML techniques to enhance evidence accessibility and analysis
within the systematic review processes. These efforts show promise in
potentially yielding substantial efficiencies for future systematic reviewing
and for broadening their analytical scope. Our work suggests that there may be
implications for the ease with which policymakers and practitioners can access
evidence. While ML techniques seem poised to play a significant role in
bridging the gap between research and policy by offering innovative ways of
gathering, accessing, and analysing data from systematic reviews, we also
highlight their current limitations and the need to exercise caution in their
application, particularly given the potential for errors and biases.
|
[
{
"created": "Mon, 24 Jun 2024 11:04:43 GMT",
"version": "v1"
}
] |
2024-06-25
|
[
[
"Fang",
"Zheng",
""
],
[
"Arana-Catania",
"Miguel",
""
],
[
"van Lier",
"Felix-Anselm",
""
],
[
"Velarde",
"Juliana Outes",
""
],
[
"Bregazzi",
"Harry",
""
],
[
"Airoldi",
"Mara",
""
],
[
"Carter",
"Eleanor",
""
],
[
"Procter",
"Rob",
""
]
] |
The sheer number of research outputs published every year makes systematic reviewing increasingly time- and resource-intensive. This paper explores the use of machine learning techniques to help navigate the systematic review process. ML has previously been used to reliably 'screen' articles for review - that is, identify relevant articles based on reviewers' inclusion criteria. The application of ML techniques to subsequent stages of a review, however, such as data extraction and evidence mapping, is in its infancy. We therefore set out to develop a series of tools that would assist in the profiling and analysis of 1,952 publications on the theme of 'outcomes-based contracting'. Tools were developed for the following tasks: assign publications into 'policy area' categories; identify and extract key information for evidence mapping, such as organisations, laws, and geographical information; connect the evidence base to an existing dataset on the same topic; and identify subgroups of articles that may share thematic content. An interactive tool using these techniques and a public dataset with their outputs have been released. Our results demonstrate the utility of ML techniques to enhance evidence accessibility and analysis within the systematic review processes. These efforts show promise in potentially yielding substantial efficiencies for future systematic reviewing and for broadening their analytical scope. Our work suggests that there may be implications for the ease with which policymakers and practitioners can access evidence. While ML techniques seem poised to play a significant role in bridging the gap between research and policy by offering innovative ways of gathering, accessing, and analysing data from systematic reviews, we also highlight their current limitations and the need to exercise caution in their application, particularly given the potential for errors and biases.
|