| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1612.09251 | Evgenia (Eugenia) Ternovska | Eugenia Ternovska | Lifted Relational Algebra with Recursion and Connections to Modal Logic | null | null | null | null | cs.LO cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new formalism for specifying and reasoning about problems that
involve heterogeneous "pieces of information" -- large collections of data,
decision procedures of any kind and complexity and connections between them.
The essence of our proposal is to lift Codd's relational algebra from
operations on relational tables to operations on classes of structures (with
recursion), and to add a direction of information propagation. We observe the
presence of information propagation in several formalisms for efficient
reasoning and use it to express unary negation and operations used in graph
databases. We carefully analyze several reasoning tasks and establish a precise
connection between a generalized query evaluation and temporal logic model
checking. Our development allows us to reveal a general correspondence between
classical and modal logics and may shed a new light on the good computational
properties of modal logics and related formalisms.
| [
{
"created": "Thu, 29 Dec 2016 19:17:31 GMT",
"version": "v1"
}
] | 2016-12-30 | [
[
"Ternovska",
"Eugenia",
""
]
] | We propose a new formalism for specifying and reasoning about problems that involve heterogeneous "pieces of information" -- large collections of data, decision procedures of any kind and complexity and connections between them. The essence of our proposal is to lift Codd's relational algebra from operations on relational tables to operations on classes of structures (with recursion), and to add a direction of information propagation. We observe the presence of information propagation in several formalisms for efficient reasoning and use it to express unary negation and operations used in graph databases. We carefully analyze several reasoning tasks and establish a precise connection between a generalized query evaluation and temporal logic model checking. Our development allows us to reveal a general correspondence between classical and modal logics and may shed a new light on the good computational properties of modal logics and related formalisms. |
1708.03792 | Debasish Pattanayak | Debasish Pattanayak, H. Ramesh, Partha Sarathi Mandal and Stefan
Schmid | Evacuating Two Robots from Two Unknown Exits on the Perimeter of a Disk | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed evacuation of mobile robots is a recent development. We consider
the evacuation problem of two robots which are initially located at the center
of a unit disk. Both the robots have to evacuate the disk through the exits
situated on the perimeter of the disk at an unknown location. The distance
between two exits along the perimeter $d$ is given. We consider two different
communication models. First, in the wireless model, the robots can send a
message to each other over a long distance. Second, in the face-to-face
communication model, the robots can exchange information with each other only
when they touch each other. The objective of the evacuation problem is to
design an algorithm which minimizes the evacuation time of both the robots. For
the wireless communication model, we propose a generic algorithm for two robots
moving to two points on the perimeter with an initial separation of $\zeta \leq
d$. We also investigate the evacuation problem for both unlabeled and labeled exits
in the wireless communication model. For the face-to-face communication model,
we propose two different algorithms for $\zeta =0$ and $\zeta =d$ for unlabeled
exits. We also propose a generic algorithm for $\zeta \leq d$ for labeled
exits. We provide lower bounds corresponding to different $d$ values in the
face-to-face communication model. We evaluate the performance of our algorithms
with simulation for both of the communication models.
| [
{
"created": "Sat, 12 Aug 2017 16:04:17 GMT",
"version": "v1"
}
] | 2017-08-15 | [
[
"Pattanayak",
"Debasish",
""
],
[
"Ramesh",
"H.",
""
],
[
"Mandal",
"Partha Sarathi",
""
],
[
"Schmid",
"Stefan",
""
]
] | Distributed evacuation of mobile robots is a recent development. We consider the evacuation problem of two robots which are initially located at the center of a unit disk. Both the robots have to evacuate the disk through the exits situated on the perimeter of the disk at an unknown location. The distance between two exits along the perimeter $d$ is given. We consider two different communication models. First, in the wireless model, the robots can send a message to each other over a long distance. Second, in the face-to-face communication model, the robots can exchange information with each other only when they touch each other. The objective of the evacuation problem is to design an algorithm which minimizes the evacuation time of both the robots. For the wireless communication model, we propose a generic algorithm for two robots moving to two points on the perimeter with an initial separation of $\zeta \leq d$. We also investigate the evacuation problem for both unlabeled and labeled exits in the wireless communication model. For the face-to-face communication model, we propose two different algorithms for $\zeta =0$ and $\zeta =d$ for unlabeled exits. We also propose a generic algorithm for $\zeta \leq d$ for labeled exits. We provide lower bounds corresponding to different $d$ values in the face-to-face communication model. We evaluate the performance of our algorithms with simulation for both of the communication models. |
2003.05898 | Michael Benedikt | Michael Benedikt, Stanislav Kikot, Piotr Ostropolski-Nalewaja, and
Miguel Romero | On monotonic determinacy and rewritability for recursive queries and
views | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A query Q is monotonically determined over a set of views if Q can be
expressed as a monotonic function of the view image. In the case of relational
algebra views and queries, monotonic determinacy coincides with rewritability
as a union of conjunctive queries, and it is decidable in important special
cases, such as for CQ views and queries. We investigate the situation for views
and queries in the recursive query language Datalog. We give both positive and
negative results about the ability to decide monotonic determinacy, and also
about the co-incidence of monotonic determinacy with Datalog rewritability.
| [
{
"created": "Thu, 12 Mar 2020 16:56:13 GMT",
"version": "v1"
}
] | 2020-03-13 | [
[
"Benedikt",
"Michael",
""
],
[
"Kikot",
"Stanislav",
""
],
[
"Ostropolski-Nalewaja",
"Piotr",
""
],
[
"Romero",
"Miguel",
""
]
] | A query Q is monotonically determined over a set of views if Q can be expressed as a monotonic function of the view image. In the case of relational algebra views and queries, monotonic determinacy coincides with rewritability as a union of conjunctive queries, and it is decidable in important special cases, such as for CQ views and queries. We investigate the situation for views and queries in the recursive query language Datalog. We give both positive and negative results about the ability to decide monotonic determinacy, and also about the co-incidence of monotonic determinacy with Datalog rewritability. |
1909.04313 | Quang-Cuong Pham | Hung Pham and Quang-Cuong Pham | Convex Controller Synthesis for Robot Contact | 8 pages, 7 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Controlling contacts is truly challenging, and this has been a major hurdle
to deploying industrial robots into unstructured/human-centric environments.
More specifically, the main challenges are: (i) how to ensure stability at all
times; (ii) how to satisfy task-specific performance specifications; (iii) how
to achieve (i) and (ii) under environment uncertainty, robot parameters
uncertainty, sensor and actuator time delays, external perturbations, etc.
Here, we propose a new approach -- Convex Controller Synthesis (CCS) -- to
tackle the above challenges based on robust control theory and convex
optimization. In two physical interaction tasks -- robot hand guiding and
sliding on surfaces with different and unknown stiffnesses -- we show that CCS
controllers outperform their classical counterparts in an essential way.
| [
{
"created": "Tue, 10 Sep 2019 06:26:28 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jan 2020 04:16:25 GMT",
"version": "v2"
}
] | 2020-01-17 | [
[
"Pham",
"Hung",
""
],
[
"Pham",
"Quang-Cuong",
""
]
] | Controlling contacts is truly challenging, and this has been a major hurdle to deploying industrial robots into unstructured/human-centric environments. More specifically, the main challenges are: (i) how to ensure stability at all times; (ii) how to satisfy task-specific performance specifications; (iii) how to achieve (i) and (ii) under environment uncertainty, robot parameters uncertainty, sensor and actuator time delays, external perturbations, etc. Here, we propose a new approach -- Convex Controller Synthesis (CCS) -- to tackle the above challenges based on robust control theory and convex optimization. In two physical interaction tasks -- robot hand guiding and sliding on surfaces with different and unknown stiffnesses -- we show that CCS controllers outperform their classical counterparts in an essential way. |
2205.12854 | Liyan Tang | Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng
Xu, Semih Yavuz, Wojciech Kry\'sci\'nski, Justin F. Rousseau, Greg Durrett | Understanding Factual Errors in Summarization: Errors, Summarizers,
Datasets, Error Detectors | Accepted to ACL 2023 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The propensity of abstractive summarization models to make factual errors has
been studied extensively, including design of metrics to detect factual errors
and annotation of errors in current systems' outputs. However, the
ever-evolving nature of summarization systems, metrics, and annotated
benchmarks makes factuality evaluation a moving target, and drawing clear
comparisons among metrics has become increasingly difficult. In this work, we
aggregate factuality error annotations from nine existing datasets and stratify
them according to the underlying summarization model. We compare performance of
state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on
this stratified benchmark and show that their performance varies significantly
across different types of summarization models. Critically, our analysis shows
that much of the recent improvement in the factuality detection space has been
on summaries from older (pre-Transformer) models instead of more relevant
recent summarization models. We further perform a finer-grained analysis per
error-type and find similar performance variance across error types for
different factuality metrics. Our results show that no one metric is superior
in all settings or for all error types, and we provide recommendations for best
practices given these insights.
| [
{
"created": "Wed, 25 May 2022 15:26:48 GMT",
"version": "v1"
},
{
"created": "Fri, 26 May 2023 00:21:51 GMT",
"version": "v2"
}
] | 2023-05-29 | [
[
"Tang",
"Liyan",
""
],
[
"Goyal",
"Tanya",
""
],
[
"Fabbri",
"Alexander R.",
""
],
[
"Laban",
"Philippe",
""
],
[
"Xu",
"Jiacheng",
""
],
[
"Yavuz",
"Semih",
""
],
[
"Kryściński",
"Wojciech",
""
],
[
"Rousseau",
"Justin F.",
""
],
[
"Durrett",
"Greg",
""
]
] | The propensity of abstractive summarization models to make factual errors has been studied extensively, including design of metrics to detect factual errors and annotation of errors in current systems' outputs. However, the ever-evolving nature of summarization systems, metrics, and annotated benchmarks makes factuality evaluation a moving target, and drawing clear comparisons among metrics has become increasingly difficult. In this work, we aggregate factuality error annotations from nine existing datasets and stratify them according to the underlying summarization model. We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models. Critically, our analysis shows that much of the recent improvement in the factuality detection space has been on summaries from older (pre-Transformer) models instead of more relevant recent summarization models. We further perform a finer-grained analysis per error-type and find similar performance variance across error types for different factuality metrics. Our results show that no one metric is superior in all settings or for all error types, and we provide recommendations for best practices given these insights. |
2001.09783 | Priyank Faldu | Priyank Faldu and Jeff Diamond and Boris Grot | Domain-Specialized Cache Management for Graph Analytics | No content changes from the previous version | null | null | null | cs.DC cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph analytics power a range of applications in areas as diverse as finance,
networking and business logistics. A common property of graphs used in the
domain of graph analytics is a power-law distribution of vertex connectivity,
wherein a small number of vertices are responsible for a high fraction of all
connections in the graph. These richly-connected, hot, vertices inherently
exhibit high reuse. However, this work finds that state-of-the-art hardware
cache management schemes struggle to capitalize on their reuse due to the highly
irregular access patterns of graph analytics.
In response, we propose GRASP, domain-specialized cache management at the
last-level cache for graph analytics. GRASP augments existing cache policies to
maximize reuse of hot vertices by protecting them against cache thrashing,
while maintaining sufficient flexibility to capture the reuse of other vertices
as needed. GRASP keeps hardware cost negligible by leveraging lightweight
software support to pinpoint hot vertices, thus eliding the need for
storage-intensive prediction mechanisms employed by state-of-the-art cache
management schemes. On a set of diverse graph-analytic applications with large
high-skew graph datasets, GRASP outperforms prior domain-agnostic schemes on
all datapoints, yielding an average speed-up of 4.2% (max 9.4%) over the
best-performing prior scheme. GRASP remains robust on low-/no-skew datasets,
whereas prior schemes consistently cause a slowdown.
| [
{
"created": "Wed, 22 Jan 2020 18:46:26 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jan 2020 11:40:31 GMT",
"version": "v2"
}
] | 2020-01-29 | [
[
"Faldu",
"Priyank",
""
],
[
"Diamond",
"Jeff",
""
],
[
"Grot",
"Boris",
""
]
] | Graph analytics power a range of applications in areas as diverse as finance, networking and business logistics. A common property of graphs used in the domain of graph analytics is a power-law distribution of vertex connectivity, wherein a small number of vertices are responsible for a high fraction of all connections in the graph. These richly-connected, hot, vertices inherently exhibit high reuse. However, this work finds that state-of-the-art hardware cache management schemes struggle in capitalizing on their reuse due to highly irregular access patterns of graph analytics. In response, we propose GRASP, domain-specialized cache management at the last-level cache for graph analytics. GRASP augments existing cache policies to maximize reuse of hot vertices by protecting them against cache thrashing, while maintaining sufficient flexibility to capture the reuse of other vertices as needed. GRASP keeps hardware cost negligible by leveraging lightweight software support to pinpoint hot vertices, thus eliding the need for storage-intensive prediction mechanisms employed by state-of-the-art cache management schemes. On a set of diverse graph-analytic applications with large high-skew graph datasets, GRASP outperforms prior domain-agnostic schemes on all datapoints, yielding an average speed-up of 4.2% (max 9.4%) over the best-performing prior scheme. GRASP remains robust on low-/no-skew datasets, whereas prior schemes consistently cause a slowdown. |
2301.06474 | Geovana Ramos Sousa Silva | Geovana Ramos Sousa Silva and Edna Dias Canedo | Towards User-Centric Guidelines for Chatbot Conversational Design | International Journal of Human-Computer Interaction (2022) | null | 10.1080/10447318.2022.2118244 | null | cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The conversational nature of chatbots poses challenges to designers since
their development is different from other software and requires investigating
new practices in the context of human-AI interaction and their impact on user
experience. Therefore, this work aims to unveil chatbot conversational
practices alongside their impacts on users to build a web guide to support
designers while conceiving conversations for chatbots. We have carried out a
Systematic Literature Review (SLR) to identify linguistic, visual, and
interactive elements used in chatbot conversational design. The SLR resulted in
40 selected studies that were reviewed and coded into a set of conversational
guidelines that were evaluated through a survey. Respondents strongly agreed
that applying the proposed guidelines in chatbot development would induce
greater user satisfaction and user engagement and the guide is usable,
flexible, clear, and understandable, making it a great ally in building
chatbots with an improved user experience.
| [
{
"created": "Mon, 16 Jan 2023 15:24:42 GMT",
"version": "v1"
}
] | 2023-01-18 | [
[
"Silva",
"Geovana Ramos Sousa",
""
],
[
"Canedo",
"Edna Dias",
""
]
] | The conversational nature of chatbots poses challenges to designers since their development is different from other software and requires investigating new practices in the context of human-AI interaction and their impact on user experience. Therefore, this work aims to unveil chatbot conversational practices alongside their impacts on users to build a web guide to support designers while conceiving conversations for chatbots. We have carried out a Systematic Literature Review (SLR) to identify linguistic, visual, and interactive elements used in chatbot conversational design. The SLR resulted in 40 selected studies that were reviewed and coded into a set of conversational guidelines that were evaluated through a survey. Respondents strongly agreed that applying the proposed guidelines in chatbot development would induce greater user satisfaction and user engagement and the guide is usable, flexible, clear, and understandable, making it a great ally in building chatbots with an improved user experience. |
2212.10869 | Ufuk Uyan | Ufuk Uyan, M. Tugberk Isyapar, Mahiye Uluyagmur Ozturk | 5G Long-Term and Large-Scale Mobile Traffic Forecasting | null | null | null | null | cs.LG cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is crucial for the service provider to comprehend and forecast mobile
traffic in large-scale cellular networks in order to govern and manage
mechanisms for base station placement, load balancing, and network planning.
The purpose of this article is to extract and simulate traffic patterns from
more than 14,000 cells that have been installed in different metropolitan
areas. To do this, we create, implement, and assess a method in which cells are
first categorized by their point of interest and then clustered based on the
temporal distribution of cells in each region. The proposed model has been
tested using real-world 5G mobile traffic datasets collected over 31 weeks in
various cities. We found that our proposed model performed well in predicting
mobile traffic patterns up to 2 weeks in advance. Our model outperformed the
base model in most areas of interest and generally achieved up to 15\% less
prediction error compared to the na\"ive approach. This indicates that our
approach is effective in predicting mobile traffic patterns in large-scale
cellular networks.
| [
{
"created": "Wed, 21 Dec 2022 09:26:33 GMT",
"version": "v1"
}
] | 2022-12-22 | [
[
"Uyan",
"Ufuk",
""
],
[
"Isyapar",
"M. Tugberk",
""
],
[
"Ozturk",
"Mahiye Uluyagmur",
""
]
] | It is crucial for the service provider to comprehend and forecast mobile traffic in large-scale cellular networks in order to govern and manage mechanisms for base station placement, load balancing, and network planning. The purpose of this article is to extract and simulate traffic patterns from more than 14,000 cells that have been installed in different metropolitan areas. To do this, we create, implement, and assess a method in which cells are first categorized by their point of interest and then clustered based on the temporal distribution of cells in each region. The proposed model has been tested using real-world 5G mobile traffic datasets collected over 31 weeks in various cities. We found that our proposed model performed well in predicting mobile traffic patterns up to 2 weeks in advance. Our model outperformed the base model in most areas of interest and generally achieved up to 15\% less prediction error compared to the na\"ive approach. This indicates that our approach is effective in predicting mobile traffic patterns in large-scale cellular networks. |
1610.07707 | Xiaowang Zhang | Xiaowang Zhang and Jiahui Zhang and Muhammad Qasim Yasin and Wenrui Wu
and Zhiyong Feng | Path discovery by Querying the federation of Relational Database and RDF
Graph | 11 pages | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The class of queries for detecting paths is important, as these can extract
implicit binary relations over the nodes of input graphs. Most of the path
querying languages used by the RDF community, like property paths in W3C SPARQL
1.1 and nested regular expressions in nSPARQL, are based on regular
expressions. Federated queries allow for combining graph patterns and
relational databases, enabling evaluation over several heterogeneous
data resources within a single query. Federated queries in W3C SPARQL 1.1
are currently evaluated over different SPARQL endpoints. In this paper, we present
a federated path querying language as an extension of regular path querying
language for supporting RDF graph integration with relational database. The
federated path querying language is absolutely more expressive than nested
regular expressions and negation-free property paths. Its additional
expressivity can be used for capturing the conjunction and federation of nested
regular path queries. Despite the increase in expressivity, we also show that
federated path queries still enjoy low computational complexity and can
be evaluated efficiently.
| [
{
"created": "Tue, 25 Oct 2016 02:12:45 GMT",
"version": "v1"
}
] | 2016-10-26 | [
[
"Zhang",
"Xiaowang",
""
],
[
"Zhang",
"Jiahui",
""
],
[
"Yasin",
"Muhammad Qasim",
""
],
[
"Wu",
"Wenrui",
""
],
[
"Feng",
"Zhiyong",
""
]
] | The class of queries for detecting paths is important, as these can extract implicit binary relations over the nodes of input graphs. Most of the path querying languages used by the RDF community, like property paths in W3C SPARQL 1.1 and nested regular expressions in nSPARQL, are based on regular expressions. Federated queries allow for combining graph patterns and relational databases, enabling evaluation over several heterogeneous data resources within a single query. Federated queries in W3C SPARQL 1.1 are currently evaluated over different SPARQL endpoints. In this paper, we present a federated path querying language as an extension of regular path querying language for supporting RDF graph integration with relational database. The federated path querying language is absolutely more expressive than nested regular expressions and negation-free property paths. Its additional expressivity can be used for capturing the conjunction and federation of nested regular path queries. Despite the increase in expressivity, we also show that federated path queries still enjoy low computational complexity and can be evaluated efficiently. |
1409.6611 | Bernhard Rumpe | Jean B\'ezivin, Bernhard Rumpe, Andy Sch\"urr, Laurence Tratt | Model Transformations in Practice Workshop (MTiP) | 8 pages, 4 figures | Satellite Events at the MoDELS 2005 Conference, MoDELS 2005. J-M
Bruel (Ed.), LNCS 3844. Springer, January 2006 | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model Transformations in Practice (MTiP) 2005 was a workshop which provided a
forum for the model transformation community to discuss practical model
transformation issues. Although many different model transformation approaches
have been proposed and explored in recent years, there has been little work on
comparing and contrasting various approaches. Without such comparisons, it is
hard to assess new model transformation approaches such as the upcoming OMG
MOF/QVT recommendation, or to discern sensible future paths for the area. Our
aims with the workshop were to create a forum that would help lead to an
increased understanding of the relative merits of different model
transformation techniques and approaches. A more advanced understanding of such
merits is of considerable benefit to both the model transformation and wider
modelling communities.
| [
{
"created": "Mon, 22 Sep 2014 17:12:43 GMT",
"version": "v1"
}
] | 2014-09-24 | [
[
"Bézivin",
"Jean",
""
],
[
"Rumpe",
"Bernhard",
""
],
[
"Schürr",
"Andy",
""
],
[
"Tratt",
"Laurence",
""
]
] | Model Transformations in Practice (MTiP) 2005 was a workshop which provided a forum for the model transformation community to discuss practical model transformation issues. Although many different model transformation approaches have been proposed and explored in recent years, there has been little work on comparing and contrasting various approaches. Without such comparisons, it is hard to assess new model transformation approaches such as the upcoming OMG MOF/QVT recommendation, or to discern sensible future paths for the area. Our aims with the workshop were to create a forum that would help lead to an increased understanding of the relative merits of different model transformation techniques and approaches. A more advanced understanding of such merits is of considerable benefit to both the model transformation and wider modelling communities. |
2110.01183 | Amogh Joshi | Amogh Joshi, Cody Buntain | Examining Similar and Ideologically Correlated Imagery in Online
Political Communication | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | This paper investigates visual media shared by US national politicians on
Twitter, how a politician's variety of image types shared reflects their
political position, and identifies a hazard in using standard methods for image
characterization in this context. While past work has yielded valuable results
on politicians' use of imagery in social media, that work has focused primarily
on photographic media, which may be insufficient given the variety of visual
media shared in such spaces (e.g., infographics, illustrations, or memes).
Leveraging multiple popular, pre-trained, deep-learning models to characterize
politicians' visuals, this work uses clustering to identify eight types of
visual media shared on Twitter, several of which are not photographic in
nature. Results show individual politicians share a variety of these types, and
the distributions of their imagery across these clusters are correlated with
their overall ideological position -- e.g., liberal politicians appear to share
a larger proportion of infographic-style images, and conservative politicians
appear to share more patriotic imagery. Manual assessment, however, reveals
that these image-characterization models often group visually similar images
with different semantic meaning into the same clusters, which has implications
for how researchers interpret clusters in this space and cluster-based
correlations with political ideology. In particular, collapsing semantic
meaning in these pre-trained models may drive null findings on certain clusters
of images rather than politicians across the ideological spectrum sharing
common types of imagery. We end this paper with a set of researcher
recommendations to prevent such issues.
| [
{
"created": "Mon, 4 Oct 2021 04:45:10 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jan 2022 19:03:43 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Jul 2023 15:49:56 GMT",
"version": "v3"
}
] | 2023-08-01 | [
[
"Joshi",
"Amogh",
""
],
[
"Buntain",
"Cody",
""
]
] | This paper investigates visual media shared by US national politicians on Twitter, how a politician's variety of image types shared reflects their political position, and identifies a hazard in using standard methods for image characterization in this context. While past work has yielded valuable results on politicians' use of imagery in social media, that work has focused primarily on photographic media, which may be insufficient given the variety of visual media shared in such spaces (e.g., infographics, illustrations, or memes). Leveraging multiple popular, pre-trained, deep-learning models to characterize politicians' visuals, this work uses clustering to identify eight types of visual media shared on Twitter, several of which are not photographic in nature. Results show individual politicians share a variety of these types, and the distributions of their imagery across these clusters are correlated with their overall ideological position -- e.g., liberal politicians appear to share a larger proportion of infographic-style images, and conservative politicians appear to share more patriotic imagery. Manual assessment, however, reveals that these image-characterization models often group visually similar images with different semantic meaning into the same clusters, which has implications for how researchers interpret clusters in this space and cluster-based correlations with political ideology. In particular, collapsing semantic meaning in these pre-trained models may drive null findings on certain clusters of images rather than politicians across the ideological spectrum sharing common types of imagery. We end this paper with a set of researcher recommendations to prevent such issues. |
1111.7088 | Hao Shen | Martin Kleinsteuber and Hao Shen | Uniqueness Analysis of Non-Unitary Matrix Joint Diagonalization | 23 pages | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Matrix Joint Diagonalization (MJD) is a powerful approach for solving the
Blind Source Separation (BSS) problem. It relies on the construction of
matrices which are diagonalized by the unknown demixing matrix. Their joint
diagonalizer serves as a correct estimate of this demixing matrix only if it is
uniquely determined. Thus, a critical question is under what conditions a joint
diagonalizer is unique. In the present work we fully answer this question about
the identifiability of MJD based BSS approaches and provide a general result on
uniqueness conditions of matrix joint diagonalization. It unifies all existing
results which exploit the concepts of non-circularity, non-stationarity,
non-whiteness, and non-Gaussianity. As a corollary, we propose a solution for
complex BSS, which can be formulated in a closed form in terms of an eigenvalue
and a singular value decomposition of two matrices.
| [
{
"created": "Wed, 30 Nov 2011 09:10:01 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Dec 2011 07:44:15 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Apr 2012 08:51:58 GMT",
"version": "v3"
}
] | 2012-04-05 | [
[
"Kleinsteuber",
"Martin",
""
],
[
"Shen",
"Hao",
""
]
] | Matrix Joint Diagonalization (MJD) is a powerful approach for solving the Blind Source Separation (BSS) problem. It relies on the construction of matrices which are diagonalized by the unknown demixing matrix. Their joint diagonalizer serves as a correct estimate of this demixing matrix only if it is uniquely determined. Thus, a critical question is under what conditions a joint diagonalizer is unique. In the present work we fully answer this question about the identifiability of MJD based BSS approaches and provide a general result on uniqueness conditions of matrix joint diagonalization. It unifies all existing results which exploit the concepts of non-circularity, non-stationarity, non-whiteness, and non-Gaussianity. As a corollary, we propose a solution for complex BSS, which can be formulated in a closed form in terms of an eigenvalue and a singular value decomposition of two matrices. |
1511.06241 | Aysegul Dundar | Aysegul Dundar, Jonghoon Jin and Eugenio Culurciello | Convolutional Clustering for Unsupervised Learning | 11 pages | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of labeling data for training deep neural networks is daunting and
tedious, requiring millions of labels to achieve the current state-of-the-art
results. Such reliance on large amounts of labeled data can be relaxed by
exploiting hierarchical features via unsupervised learning techniques. In this
work, we propose to train a deep convolutional network based on an enhanced
version of the k-means clustering algorithm, which reduces the number of
correlated parameters in the form of similar filters, and thus increases test
categorization accuracy. We call our algorithm convolutional k-means
clustering. We further show that learning the connection between the layers of
a deep convolutional neural network improves its ability to be trained on a
smaller amount of labeled data. Our experiments show that the proposed
algorithm outperforms other techniques that learn filters unsupervised.
Specifically, we obtained a test accuracy of 74.1% on STL-10 and a test error
of 0.5% on MNIST.
| [
{
"created": "Thu, 19 Nov 2015 16:31:46 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Feb 2016 16:46:53 GMT",
"version": "v2"
}
] | 2016-02-17 | [
[
"Dundar",
"Aysegul",
""
],
[
"Jin",
"Jonghoon",
""
],
[
"Culurciello",
"Eugenio",
""
]
] | The task of labeling data for training deep neural networks is daunting and tedious, requiring millions of labels to achieve the current state-of-the-art results. Such reliance on large amounts of labeled data can be relaxed by exploiting hierarchical features via unsupervised learning techniques. In this work, we propose to train a deep convolutional network based on an enhanced version of the k-means clustering algorithm, which reduces the number of correlated parameters in the form of similar filters, and thus increases test categorization accuracy. We call our algorithm convolutional k-means clustering. We further show that learning the connection between the layers of a deep convolutional neural network improves its ability to be trained on a smaller amount of labeled data. Our experiments show that the proposed algorithm outperforms other techniques that learn filters unsupervised. Specifically, we obtained a test accuracy of 74.1% on STL-10 and a test error of 0.5% on MNIST. |
2306.15457 | Hong Joo Lee | Hong Joo Lee, Yong Man Ro | Robust Proxy: Improving Adversarial Robustness by Robust Proxy Learning | Accepted at IEEE Transactions on Information Forensics and Security
(TIFS) | null | 10.1109/TIFS.2023.3288672 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, it has been widely known that deep neural networks are highly
vulnerable and easily broken by adversarial attacks. To mitigate the
adversarial vulnerability, many defense algorithms have been proposed.
Recently, to improve adversarial robustness, many works try to enhance feature
representation by imposing more direct supervision on the discriminative
feature. However, existing approaches lack an understanding of learning
adversarially robust feature representation. In this paper, we propose a novel
training framework called Robust Proxy Learning. In the proposed method, the
model explicitly learns robust feature representations with robust proxies. To
this end, firstly, we demonstrate that we can generate class-representative
robust features by adding class-wise robust perturbations. Then, we use the
class representative features as robust proxies. With the class-wise robust
features, the model explicitly learns adversarially robust features through the
proposed robust proxy learning framework. Through extensive experiments, we
verify that we can manually generate robust features, and our proposed learning
framework could increase the robustness of the DNNs.
| [
{
"created": "Tue, 27 Jun 2023 13:22:19 GMT",
"version": "v1"
}
] | 2023-06-28 | [
[
"Lee",
"Hong Joo",
""
],
[
"Ro",
"Yong Man",
""
]
] | Recently, it has been widely known that deep neural networks are highly vulnerable and easily broken by adversarial attacks. To mitigate the adversarial vulnerability, many defense algorithms have been proposed. Recently, to improve adversarial robustness, many works try to enhance feature representation by imposing more direct supervision on the discriminative feature. However, existing approaches lack an understanding of learning adversarially robust feature representation. In this paper, we propose a novel training framework called Robust Proxy Learning. In the proposed method, the model explicitly learns robust feature representations with robust proxies. To this end, firstly, we demonstrate that we can generate class-representative robust features by adding class-wise robust perturbations. Then, we use the class representative features as robust proxies. With the class-wise robust features, the model explicitly learns adversarially robust features through the proposed robust proxy learning framework. Through extensive experiments, we verify that we can manually generate robust features, and our proposed learning framework could increase the robustness of the DNNs. |
2110.05594 | Dr. Suryansh Kumar | Berk Kaya, Suryansh Kumar, Francesco Sarno, Vittorio Ferrari, Luc Van
Gool | Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo | Accepted for publication at IEEE/CVF WACV 2022. 18 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a modern solution to the multi-view photometric stereo problem
(MVPS). Our work suitably exploits the image formation model in a MVPS
experimental setup to recover the dense 3D reconstruction of an object from
images. We procure the surface orientation using a photometric stereo (PS)
image formation model and blend it with a multi-view neural radiance field
representation to recover the object's surface geometry. Contrary to the
previous multi-staged framework to MVPS, where the position, iso-depth
contours, or orientation measurements are estimated independently and then
fused later, our method is simple to implement and realize. Our method performs
neural rendering of multi-view images while utilizing surface normals estimated
by a deep photometric stereo network. We render the MVPS images by considering
the object's surface normals for each 3D sample point along the viewing
direction rather than explicitly using the density gradient in the volume space
via 3D occupancy information. We optimize the proposed neural radiance field
representation for the MVPS setup efficiently using a fully connected deep
network to recover the 3D geometry of an object. Extensive evaluation on the
DiLiGenT-MV benchmark dataset shows that our method performs better than the
approaches that perform only PS or only multi-view stereo (MVS) and provides
comparable results against the state-of-the-art multi-stage fusion methods.
| [
{
"created": "Mon, 11 Oct 2021 20:20:03 GMT",
"version": "v1"
}
] | 2021-10-13 | [
[
"Kaya",
"Berk",
""
],
[
"Kumar",
"Suryansh",
""
],
[
"Sarno",
"Francesco",
""
],
[
"Ferrari",
"Vittorio",
""
],
[
"Van Gool",
"Luc",
""
]
] | We present a modern solution to the multi-view photometric stereo problem (MVPS). Our work suitably exploits the image formation model in a MVPS experimental setup to recover the dense 3D reconstruction of an object from images. We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry. Contrary to the previous multi-staged framework to MVPS, where the position, iso-depth contours, or orientation measurements are estimated independently and then fused later, our method is simple to implement and realize. Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network. We render the MVPS images by considering the object's surface normals for each 3D sample point along the viewing direction rather than explicitly using the density gradient in the volume space via 3D occupancy information. We optimize the proposed neural radiance field representation for the MVPS setup efficiently using a fully connected deep network to recover the 3D geometry of an object. Extensive evaluation on the DiLiGenT-MV benchmark dataset shows that our method performs better than the approaches that perform only PS or only multi-view stereo (MVS) and provides comparable results against the state-of-the-art multi-stage fusion methods. |
1309.0719 | David Bryson | David M. Bryson and Charles Ofria | Understanding Evolutionary Potential in Virtual CPU Instruction Set
Architectures | null | PLOS ONE 8(12): e83242. (2013) | 10.1371/journal.pone.0083242 | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate fundamental decisions in the design of instruction set
architectures for linear genetic programs that are used as both model systems
in evolutionary biology and underlying solution representations in evolutionary
computation. We subjected digital organisms with each tested architecture to
seven different computational environments designed to present a range of
evolutionary challenges. Our goal was to engineer a general purpose
architecture that would be effective under a broad range of evolutionary
conditions. We evaluated six different types of architectural features for the
virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more
precisely modify the function of genetic instructions, (2) memory: we provided
an increased number of registers in the virtual CPUs, (3) decoupled sensors and
actuators: we separated input and output operations to enable greater control
over data flow. We also tested a variety of methods to regulate expression: (4)
explicit labels that allow programs to dynamically refer to specific genome
positions, (5) position-relative search instructions, and (6) multiple new flow
control instructions, including conditionals and jumps. Each of these features
also adds complication to the instruction set and risks slowing evolution due
to epistatic interactions. Two features (multiple argument specification and
separated I/O) demonstrated substantial improvements in the majority of test
environments. Some of the remaining tested modifications were detrimental,
though most exhibit no systematic effects on evolutionary potential,
highlighting the robustness of digital evolution. Combined, these observations
enhance our understanding of how instruction architecture impacts evolutionary
potential, enabling the creation of architectures that support more rapid
evolution of complex solutions to a broad range of challenges.
| [
{
"created": "Tue, 3 Sep 2013 15:17:07 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Oct 2013 17:41:23 GMT",
"version": "v2"
}
] | 2013-12-30 | [
[
"Bryson",
"David M.",
""
],
[
"Ofria",
"Charles",
""
]
] | We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments. Some of the remaining tested modifications were detrimental, though most exhibit no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges. |
2112.09810 | Kaize Ding | Kaize Ding, Jianling Wang, James Caverlee and Huan Liu | Meta Propagation Networks for Graph Few-shot Semi-supervised Learning | Accepted by AAAI2022 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Inspired by the extensive success of deep learning, graph neural networks
(GNNs) have been proposed to learn expressive node representations and
demonstrated promising performance in various graph learning tasks. However,
existing endeavors predominately focus on the conventional semi-supervised
setting where relatively abundant gold-labeled nodes are provided. However,
this is often impractical, as data labeling is unbearably laborious
and requires intensive domain knowledge, especially when considering the
heterogeneity of graph-structured data. Under the few-shot semi-supervised
setting, the performance of most of the existing GNNs is inevitably undermined
by the overfitting and oversmoothing issues, largely owing to the shortage of
labeled data. In this paper, we propose a decoupled network architecture
equipped with a novel meta-learning algorithm to solve this problem. In
essence, our framework Meta-PN infers high-quality pseudo labels on unlabeled
nodes via a meta-learned label propagation strategy, which effectively augments
the scarce labeled data while enabling large receptive fields during training.
Extensive experiments demonstrate that our approach offers easy and substantial
performance gains compared to existing techniques on various benchmark
datasets.
| [
{
"created": "Sat, 18 Dec 2021 00:11:56 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Apr 2022 22:06:43 GMT",
"version": "v2"
}
] | 2022-04-05 | [
[
"Ding",
"Kaize",
""
],
[
"Wang",
"Jianling",
""
],
[
"Caverlee",
"James",
""
],
[
"Liu",
"Huan",
""
]
] | Inspired by the extensive success of deep learning, graph neural networks (GNNs) have been proposed to learn expressive node representations and demonstrated promising performance in various graph learning tasks. However, existing endeavors predominately focus on the conventional semi-supervised setting where relatively abundant gold-labeled nodes are provided. However, this is often impractical, as data labeling is unbearably laborious and requires intensive domain knowledge, especially when considering the heterogeneity of graph-structured data. Under the few-shot semi-supervised setting, the performance of most of the existing GNNs is inevitably undermined by the overfitting and oversmoothing issues, largely owing to the shortage of labeled data. In this paper, we propose a decoupled network architecture equipped with a novel meta-learning algorithm to solve this problem. In essence, our framework Meta-PN infers high-quality pseudo labels on unlabeled nodes via a meta-learned label propagation strategy, which effectively augments the scarce labeled data while enabling large receptive fields during training. Extensive experiments demonstrate that our approach offers easy and substantial performance gains compared to existing techniques on various benchmark datasets. |
1908.07831 | Hong-Ren Mao | Hongren Mao, Hung-yi Lee | Polly Want a Cracker: Analyzing Performance of Parroting on Paraphrase
Generation Datasets | Accepted for EMNLP 2019 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Paraphrase generation is an interesting and challenging NLP task which has
numerous practical applications. In this paper, we analyze datasets commonly
used for paraphrase generation research, and show that simply parroting input
sentences surpasses state-of-the-art models in the literature when evaluated on
standard metrics. Our findings illustrate that a model could be seemingly adept
at generating paraphrases, despite only making trivial changes to the input
sentence or even none at all.
| [
{
"created": "Mon, 19 Aug 2019 05:40:42 GMT",
"version": "v1"
}
] | 2019-08-22 | [
[
"Mao",
"Hongren",
""
],
[
"Lee",
"Hung-yi",
""
]
] | Paraphrase generation is an interesting and challenging NLP task which has numerous practical applications. In this paper, we analyze datasets commonly used for paraphrase generation research, and show that simply parroting input sentences surpasses state-of-the-art models in the literature when evaluated on standard metrics. Our findings illustrate that a model could be seemingly adept at generating paraphrases, despite only making trivial changes to the input sentence or even none at all. |
1904.11753 | Naoto Sato | Naoto Sato, Hironobu Kuruma, Yuichiroh Nakagawa, Hideto Ogawa | Formal Verification of Decision-Tree Ensemble Model and Detection of its
Violating-input-value Ranges | null | IEICE Transaction D, Feb, 2020 | 10.1587/transinf.2019EDP7120 | Vol.E103-D, No.02, pp.363-378 | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As one type of machine-learning model, a "decision-tree ensemble model"
(DTEM) is represented by a set of decision trees. A DTEM is mainly known to be
valid for structured data; however, like other machine-learning models, it is
difficult to train so that it returns the correct output value for any input
value. Accordingly, when a DTEM is used in regard to a system that requires
reliability, it is important to comprehensively detect input values that lead
to malfunctions of a system (failures) during development and take appropriate
measures. One conceivable solution is to install an input filter that controls
the input to the DTEM, and to use separate software to process input values
that may lead to failures. To develop the input filter, it is necessary to
specify the filtering condition of the input value that leads to the
malfunction of the system. Given that necessity, in this paper, we propose a
method for formally verifying a DTEM and, according to the result of the
verification, if an input value leading to a failure is found, extracting the
range in which such an input value exists. The proposed method can
comprehensively extract the range in which the input value leading to the
failure exists; therefore, by creating an input filter based on that range, it
is possible to prevent the failure occurring in the system. In this paper, the
algorithm of the proposed method is described, and the results of a case study
using a dataset of house prices are presented. On the basis of those results,
the feasibility of the proposed method is demonstrated, and its scalability is
evaluated.
| [
{
"created": "Fri, 26 Apr 2019 10:38:01 GMT",
"version": "v1"
}
] | 2020-05-25 | [
[
"Sato",
"Naoto",
""
],
[
"Kuruma",
"Hironobu",
""
],
[
"Nakagawa",
"Yuichiroh",
""
],
[
"Ogawa",
"Hideto",
""
]
] | As one type of machine-learning model, a "decision-tree ensemble model" (DTEM) is represented by a set of decision trees. A DTEM is mainly known to be valid for structured data; however, like other machine-learning models, it is difficult to train so that it returns the correct output value for any input value. Accordingly, when a DTEM is used in regard to a system that requires reliability, it is important to comprehensively detect input values that lead to malfunctions of a system (failures) during development and take appropriate measures. One conceivable solution is to install an input filter that controls the input to the DTEM, and to use separate software to process input values that may lead to failures. To develop the input filter, it is necessary to specify the filtering condition of the input value that leads to the malfunction of the system. Given that necessity, in this paper, we propose a method for formally verifying a DTEM and, according to the result of the verification, if an input value leading to a failure is found, extracting the range in which such an input value exists. The proposed method can comprehensively extract the range in which the input value leading to the failure exists; therefore, by creating an input filter based on that range, it is possible to prevent the failure occurring in the system. In this paper, the algorithm of the proposed method is described, and the results of a case study using a dataset of house prices are presented. On the basis of those results, the feasibility of the proposed method is demonstrated, and its scalability is evaluated. |
2304.14226 | Yueming Hao | Yueming Hao, Xu Zhao, Bin Bao, David Berard, Will Constable, Adnan
Aziz, Xu Liu | TorchBench: Benchmarking PyTorch with High API Surface Coverage | null | null | null | null | cs.LG cs.AI cs.PF | http://creativecommons.org/licenses/by/4.0/ | Deep learning (DL) has been a revolutionary technique in various domains. To
facilitate the model development and deployment, many deep learning frameworks
are proposed, among which PyTorch is one of the most popular solutions. The
performance of the ecosystem around PyTorch is critically important, which saves
the costs of training models and reduces the response time of model inferences.
In this paper, we propose TorchBench, a novel benchmark suite to study the
performance of PyTorch software stack. Unlike existing benchmark suites,
TorchBench encloses many representative models, covering a large PyTorch API
surface. TorchBench is able to comprehensively characterize the performance of
the PyTorch software stack, guiding the performance optimization across models,
PyTorch framework, and GPU libraries. We show two practical use cases of
TorchBench. (1) We profile TorchBench to identify GPU performance
inefficiencies in PyTorch. We are able to optimize many performance bugs and
upstream patches to the official PyTorch repository. (2) We integrate
TorchBench into PyTorch continuous integration system. We are able to identify
performance regression in multiple daily code checkins to prevent PyTorch
repository from introducing performance bugs. TorchBench is open source and
keeps evolving.
| [
{
"created": "Thu, 27 Apr 2023 14:37:05 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Apr 2023 19:56:19 GMT",
"version": "v2"
},
{
"created": "Sat, 24 Jun 2023 16:57:43 GMT",
"version": "v3"
}
] | 2023-06-27 | [
[
"Hao",
"Yueming",
""
],
[
"Zhao",
"Xu",
""
],
[
"Bao",
"Bin",
""
],
[
"Berard",
"David",
""
],
[
"Constable",
"Will",
""
],
[
"Aziz",
"Adnan",
""
],
[
"Liu",
"Xu",
""
]
] | Deep learning (DL) has been a revolutionary technique in various domains. To facilitate the model development and deployment, many deep learning frameworks are proposed, among which PyTorch is one of the most popular solutions. The performance of the ecosystem around PyTorch is critically important, which saves the costs of training models and reduces the response time of model inferences. In this paper, we propose TorchBench, a novel benchmark suite to study the performance of PyTorch software stack. Unlike existing benchmark suites, TorchBench encloses many representative models, covering a large PyTorch API surface. TorchBench is able to comprehensively characterize the performance of the PyTorch software stack, guiding the performance optimization across models, PyTorch framework, and GPU libraries. We show two practical use cases of TorchBench. (1) We profile TorchBench to identify GPU performance inefficiencies in PyTorch. We are able to optimize many performance bugs and upstream patches to the official PyTorch repository. (2) We integrate TorchBench into PyTorch continuous integration system. We are able to identify performance regression in multiple daily code checkins to prevent PyTorch repository from introducing performance bugs. TorchBench is open source and keeps evolving. |
2111.08774 | Pinelopi Papalampidi | Pinelopi Papalampidi, Frank Keller, Mirella Lapata | Film Trailer Generation via Task Decomposition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Movie trailers perform multiple functions: they introduce viewers to the
story, convey the mood and artistic style of the film, and encourage audiences
to see the movie. These diverse functions make automatic trailer generation a
challenging endeavor. We decompose it into two subtasks: narrative structure
identification and sentiment prediction. We model movies as graphs, where nodes
are shots and edges denote semantic relations between them. We learn these
relations using joint contrastive training which leverages privileged textual
information (e.g., characters, actions, situations) from screenplays. An
unsupervised algorithm then traverses the graph and generates trailers that
human judges prefer to ones generated by competitive supervised approaches.
| [
{
"created": "Tue, 16 Nov 2021 20:50:52 GMT",
"version": "v1"
}
] | 2021-11-18 | [
[
"Papalampidi",
"Pinelopi",
""
],
[
"Keller",
"Frank",
""
],
[
"Lapata",
"Mirella",
""
]
] | Movie trailers perform multiple functions: they introduce viewers to the story, convey the mood and artistic style of the film, and encourage audiences to see the movie. These diverse functions make automatic trailer generation a challenging endeavor. We decompose it into two subtasks: narrative structure identification and sentiment prediction. We model movies as graphs, where nodes are shots and edges denote semantic relations between them. We learn these relations using joint contrastive training which leverages privileged textual information (e.g., characters, actions, situations) from screenplays. An unsupervised algorithm then traverses the graph and generates trailers that human judges prefer to ones generated by competitive supervised approaches. |
2405.13483 | Jiancheng Tang | Jiancheng Tang, Qianqian Yang, Deniz G\"und\"uz | Distributed Indirect Source Coding with Decoder Side Information | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies a variant of the rate-distortion problem motivated by
task-oriented semantic communication and distributed learning problems, where
$M$ correlated sources are independently encoded for a central decoder. The
decoder has access to correlated side information in addition to the messages
received from the encoders, and aims to recover a latent random variable
correlated with the sources observed by the encoders within a given distortion
constraint rather than recovering the sources themselves. We provide bounds on
the rate-distortion region for this scenario in general, and characterize the
rate-distortion function exactly when the sources are conditionally independent
given the side information.
| [
{
"created": "Wed, 22 May 2024 09:48:21 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Tang",
"Jiancheng",
""
],
[
"Yang",
"Qianqian",
""
],
[
"Gündüz",
"Deniz",
""
]
] | This paper studies a variant of the rate-distortion problem motivated by task-oriented semantic communication and distributed learning problems, where $M$ correlated sources are independently encoded for a central decoder. The decoder has access to correlated side information in addition to the messages received from the encoders, and aims to recover a latent random variable correlated with the sources observed by the encoders within a given distortion constraint rather than recovering the sources themselves. We provide bounds on the rate-distortion region for this scenario in general, and characterize the rate-distortion function exactly when the sources are conditionally independent given the side information. |
2312.10212 | Fabian Hinder | Fabian Hinder, Valerie Vaquet, Barbara Hammer | A Remark on Concept Drift for Dependent Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concept drift, i.e., the change of the data generating distribution, can
render machine learning models inaccurate. Several works address the phenomenon
of concept drift in the streaming context usually assuming that consecutive
data points are independent of each other. To generalize to dependent data,
many authors link the notion of concept drift to time series. In this work, we
show that the temporal dependencies are strongly influencing the sampling
process. Thus, the used definitions need major modifications. In particular, we
show that the notion of stationarity is not suited for this setup and discuss
alternatives. We demonstrate that these alternative formal notions describe the
observable learning behavior in numerical experiments.
| [
{
"created": "Fri, 15 Dec 2023 21:11:46 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Hinder",
"Fabian",
""
],
[
"Vaquet",
"Valerie",
""
],
[
"Hammer",
"Barbara",
""
]
] | Concept drift, i.e., the change of the data generating distribution, can render machine learning models inaccurate. Several works address the phenomenon of concept drift in the streaming context usually assuming that consecutive data points are independent of each other. To generalize to dependent data, many authors link the notion of concept drift to time series. In this work, we show that the temporal dependencies are strongly influencing the sampling process. Thus, the used definitions need major modifications. In particular, we show that the notion of stationarity is not suited for this setup and discuss alternatives. We demonstrate that these alternative formal notions describe the observable learning behavior in numerical experiments. |
1701.01547 | Arun Singh | Arun Kumar Singh, Sigal Berman and Ilana Nisky | Stochastic Optimal Control for Modeling Reaching Movements in the
Presence of Obstacles: Theory and Simulation | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many human-in-the-loop robotic applications such as robot-assisted surgery
and remote teleoperation, predicting the intended motion of the human operator
may be useful for successful implementation of shared control, guidance virtual
fixtures, and predictive control. Developing computational models of human
movements is a critical foundation for such motion prediction frameworks. With
this motivation, we present a computational framework for modeling reaching
movements in the presence of obstacles. We propose a stochastic optimal control
framework that consists of probabilistic collision avoidance constraints and a
cost function that trades-off between effort and end-state variance in the
presence of a signal-dependent noise. First, we present a series of
reformulations to convert the original non-linear and non-convex optimal
control into a parametric quadratic programming problem. We show that the
parameters can be tuned to model various collision avoidance strategies,
thereby capturing the quintessential variability associated with human motion.
Then, we present a simulation study that demonstrates the complex interaction
between avoidance strategies, control cost, and the probability of collision
avoidance. The proposed framework can benefit a variety of applications that
require teleoperation in cluttered spaces, including robot-assisted surgery. In
addition, it can also be viewed as a new optimizer which produces smooth and
probabilistically-safe trajectories under signal dependent noise.
| [
{
"created": "Fri, 6 Jan 2017 05:27:38 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Mar 2018 20:27:30 GMT",
"version": "v2"
}
] | 2018-03-28 | [
[
"Singh",
"Arun Kumar",
""
],
[
"Berman",
"Sigal",
""
],
[
"Nisky",
"Ilana",
""
]
] | In many human-in-the-loop robotic applications such as robot-assisted surgery and remote teleoperation, predicting the intended motion of the human operator may be useful for successful implementation of shared control, guidance virtual fixtures, and predictive control. Developing computational models of human movements is a critical foundation for such motion prediction frameworks. With this motivation, we present a computational framework for modeling reaching movements in the presence of obstacles. We propose a stochastic optimal control framework that consists of probabilistic collision avoidance constraints and a cost function that trades-off between effort and end-state variance in the presence of a signal-dependent noise. First, we present a series of reformulations to convert the original non-linear and non-convex optimal control into a parametric quadratic programming problem. We show that the parameters can be tuned to model various collision avoidance strategies, thereby capturing the quintessential variability associated with human motion. Then, we present a simulation study that demonstrates the complex interaction between avoidance strategies, control cost, and the probability of collision avoidance. The proposed framework can benefit a variety of applications that require teleoperation in cluttered spaces, including robot-assisted surgery. In addition, it can also be viewed as a new optimizer which produces smooth and probabilistically-safe trajectories under signal dependent noise. |
2010.13415 | Yucheng Wang | Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu and
Limin Sun | TPLinker: Single-stage Joint Extraction of Entities and Relations
Through Token Pair Linking | COLING 2020 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting entities and relations from unstructured text has attracted
increasing attention in recent years but remains challenging, due to the
intrinsic difficulty in identifying overlapping relations with shared entities.
Prior works show that joint learning can result in a noticeable performance
gain. However, they usually involve sequential interrelated steps and suffer
from the problem of exposure bias. At training time, they predict with the
ground truth conditions while at inference it has to make extraction from
scratch. This discrepancy leads to error accumulation. To mitigate the issue,
we propose in this paper a one-stage joint extraction model, namely, TPLinker,
which is capable of discovering overlapping relations sharing one or both
entities while immune from the exposure bias. TPLinker formulates joint
extraction as a token pair linking problem and introduces a novel handshaking
tagging scheme that aligns the boundary tokens of entity pairs under each
relation type. Experiment results show that TPLinker performs significantly
better on overlapping and multiple relation extraction, and achieves
state-of-the-art performance on two public datasets.
| [
{
"created": "Mon, 26 Oct 2020 08:35:06 GMT",
"version": "v1"
}
] | 2020-10-27 | [
[
"Wang",
"Yucheng",
""
],
[
"Yu",
"Bowen",
""
],
[
"Zhang",
"Yueyang",
""
],
[
"Liu",
"Tingwen",
""
],
[
"Zhu",
"Hongsong",
""
],
[
"Sun",
"Limin",
""
]
] | Extracting entities and relations from unstructured text has attracted increasing attention in recent years but remains challenging, due to the intrinsic difficulty in identifying overlapping relations with shared entities. Prior works show that joint learning can result in a noticeable performance gain. However, they usually involve sequential interrelated steps and suffer from the problem of exposure bias. At training time, they predict with the ground truth conditions while at inference it has to make extraction from scratch. This discrepancy leads to error accumulation. To mitigate the issue, we propose in this paper a one-stage joint extraction model, namely, TPLinker, which is capable of discovering overlapping relations sharing one or both entities while immune from the exposure bias. TPLinker formulates joint extraction as a token pair linking problem and introduces a novel handshaking tagging scheme that aligns the boundary tokens of entity pairs under each relation type. Experiment results show that TPLinker performs significantly better on overlapping and multiple relation extraction, and achieves state-of-the-art performance on two public datasets. |
cs/0404039 | Alexei Kaltchenko | Alexei Kaltchenko | Algorithms for Estimating Information Distance with Application to
Bioinformatics and Linguistics | 4 pages | null | null | null | cs.CC cs.CE q-bio.GN | null | After reviewing unnormalized and normalized information distances based on
incomputable notions of Kolmogorov complexity, we discuss how Kolmogorov
complexity can be approximated by data compression algorithms. We argue that
optimal algorithms for data compression with side information can be
successfully used to approximate the normalized distance. Next, we discuss an
alternative information distance, which is based on relative entropy rate (also
known as Kullback-Leibler divergence), and compression-based algorithms for its
estimation. Based on available biological and linguistic data, we arrive to
unexpected conclusion that in Bioinformatics and Computational Linguistics this
alternative distance is more relevant and important than the ones based on
Kolmogorov complexity.
| [
{
"created": "Tue, 20 Apr 2004 15:18:43 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Kaltchenko",
"Alexei",
""
]
] | After reviewing unnormalized and normalized information distances based on incomputable notions of Kolmogorov complexity, we discuss how Kolmogorov complexity can be approximated by data compression algorithms. We argue that optimal algorithms for data compression with side information can be successfully used to approximate the normalized distance. Next, we discuss an alternative information distance, which is based on relative entropy rate (also known as Kullback-Leibler divergence), and compression-based algorithms for its estimation. Based on available biological and linguistic data, we arrive to unexpected conclusion that in Bioinformatics and Computational Linguistics this alternative distance is more relevant and important than the ones based on Kolmogorov complexity. |
2003.09638 | Chuan Chen | Dalong Yang, Chuan Chen, Youhao Zheng, Zibin Zheng, Shih-wei Liao | An Uncoupled Training Architecture for Large Graph Learning | null | null | null | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Convolutional Network (GCN) has been widely used in graph learning
tasks. However, GCN-based models (GCNs) is an inherently coupled training
framework repetitively conducting the complex neighboring aggregation, which
leads to the limitation of flexibility in processing large-scale graph. With
the depth of layers increases, the computational and memory cost of GCNs grow
explosively due to the recursive neighborhood expansion. To tackle these
issues, we present Node2Grids, a flexible uncoupled training framework that
leverages the independent mapped data for obtaining the embedding. Instead of
directly processing the coupled nodes as GCNs, Node2Grids supports a more
efficacious method in practice, mapping the coupled graph data into the
independent grid-like data which can be fed into the efficient Convolutional
Neural Network (CNN). This simple but valid strategy significantly saves memory
and computational resource while achieving comparable results with the leading
GCN-based models. Specifically, by ranking each node's influence through
degree, Node2Grids selects the most influential first-order as well as
second-order neighbors with central node fusion information to construct the
grid-like data. For further improving the efficiency of downstream tasks, a
simple CNN-based neural network is employed to capture the significant
information from the mapped grid-like data. Moreover, the grid-level attention
mechanism is implemented, which enables implicitly specifying the different
weights for neighboring nodes with different influences. In addition to the
typical transductive and inductive learning tasks, we also verify our framework
on million-scale graphs to demonstrate the superiority of the proposed
Node2Grids model against the state-of-the-art GCN-based approaches.
| [
{
"created": "Sat, 21 Mar 2020 11:49:16 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Jul 2020 03:32:29 GMT",
"version": "v2"
}
] | 2020-07-23 | [
[
"Yang",
"Dalong",
""
],
[
"Chen",
"Chuan",
""
],
[
"Zheng",
"Youhao",
""
],
[
"Zheng",
"Zibin",
""
],
[
"Liao",
"Shih-wei",
""
]
] | Graph Convolutional Network (GCN) has been widely used in graph learning tasks. However, GCN-based models (GCNs) is an inherently coupled training framework repetitively conducting the complex neighboring aggregation, which leads to the limitation of flexibility in processing large-scale graph. With the depth of layers increases, the computational and memory cost of GCNs grow explosively due to the recursive neighborhood expansion. To tackle these issues, we present Node2Grids, a flexible uncoupled training framework that leverages the independent mapped data for obtaining the embedding. Instead of directly processing the coupled nodes as GCNs, Node2Grids supports a more efficacious method in practice, mapping the coupled graph data into the independent grid-like data which can be fed into the efficient Convolutional Neural Network (CNN). This simple but valid strategy significantly saves memory and computational resource while achieving comparable results with the leading GCN-based models. Specifically, by ranking each node's influence through degree, Node2Grids selects the most influential first-order as well as second-order neighbors with central node fusion information to construct the grid-like data. For further improving the efficiency of downstream tasks, a simple CNN-based neural network is employed to capture the significant information from the mapped grid-like data. Moreover, the grid-level attention mechanism is implemented, which enables implicitly specifying the different weights for neighboring nodes with different influences. In addition to the typical transductive and inductive learning tasks, we also verify our framework on million-scale graphs to demonstrate the superiority of the proposed Node2Grids model against the state-of-the-art GCN-based approaches. |
1702.06398 | Nitish Prajapati | Ayub Khan, Dinesh Khattar, Nitish Prajapati | Dual combination combination multi switching synchronization of eight
chaotic systems | 19 pages, 7 figures | null | 10.1016/j.cjph.2017.06.002 | null | cs.SY math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel scheme for synchronizing four drive and four response
systems is proposed by the authors. The idea of multi switching and dual
combination synchronization is extended to dual combination-combination multi
switching synchronization involving eight chaotic systems and is a first of its
kind. Due to the multiple combination of chaotic systems and multi switching
the resultant dynamic behaviour is so complex that, in communication theory,
transmission and security of the resultant signal is more effective. Using
Lyapunov stability theory, sufficient conditions are achieved and suitable
controllers are designed to realise the desired synchronization. Corresponding
theoretical analysis is presented and numerical simulations performed to
demonstrate the effectiveness of the proposed scheme.
| [
{
"created": "Fri, 17 Feb 2017 11:32:47 GMT",
"version": "v1"
}
] | 2017-08-02 | [
[
"Khan",
"Ayub",
""
],
[
"Khattar",
"Dinesh",
""
],
[
"Prajapati",
"Nitish",
""
]
] | In this paper, a novel scheme for synchronizing four drive and four response systems is proposed by the authors. The idea of multi switching and dual combination synchronization is extended to dual combination-combination multi switching synchronization involving eight chaotic systems and is a first of its kind. Due to the multiple combination of chaotic systems and multi switching the resultant dynamic behaviour is so complex that, in communication theory, transmission and security of the resultant signal is more effective. Using Lyapunov stability theory, sufficient conditions are achieved and suitable controllers are designed to realise the desired synchronization. Corresponding theoretical analysis is presented and numerical simulations performed to demonstrate the effectiveness of the proposed scheme. |
2108.11309 | Robin Haunschild | Robin Haunschild and Lutz Bornmann | Report on Workshop III "Cited References Analysis Using CRExplorer" at
the 18th International Conference of the International Society for
Scientometrics and Informetrics (ISSI2021) | 6 pages, 1 figure | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have organized Workshop III entitled "Cited References Analysis Using
CRExplorer" at ISSI2021. Here, we report and reflect on this workshop. The aim
of this workshop was to bring beginners, practitioners, and experts in cited
references analyses together. A mixture of presentations and an interactive
part was intended to provide benefits for all kinds of scientometricians with
an interest in cited references analyses.
| [
{
"created": "Wed, 25 Aug 2021 16:23:26 GMT",
"version": "v1"
}
] | 2021-08-26 | [
[
"Haunschild",
"Robin",
""
],
[
"Bornmann",
"Lutz",
""
]
] | We have organized Workshop III entitled "Cited References Analysis Using CRExplorer" at ISSI2021. Here, we report and reflect on this workshop. The aim of this workshop was to bring beginners, practitioners, and experts in cited references analyses together. A mixture of presentations and an interactive part was intended to provide benefits for all kinds of scientometricians with an interest in cited references analyses. |
1907.07509 | Guillaume Noyel | Guillaume Noyel (IPRI, SIGPH@iPRI) | A Link Between the Multiplicative and Additive Functional Asplund's
Metrics | null | 14th International Symposium on Mathematical Morphology, Saarland
University, Jul 2019, Saarbr\"ucken, Germany. pp.41-53 | 10.1007/978-3-030-20867-7_4 | null | cs.CV math.FA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional Asplund's metrics were recently introduced to perform pattern
matching robust to lighting changes thanks to double-sided probing in the
Logarithmic Image Processing (LIP) framework. Two metrics were defined, namely
the LIP-multiplicative Asplund's metric which is robust to variations of object
thickness (or opacity) and the LIP-additive Asplund's metric which is robust to
variations of camera exposure-time (or light intensity). Maps of distances-i.e.
maps of these metric values-were also computed between a reference template and
an image. Recently, it was proven that the map of LIP-multiplicative Asplund's
distances corresponds to mathematical morphology operations. In this paper, the
link between both metrics and between their corresponding distance maps will be
demonstrated. It will be shown that the map of LIP-additive Asplund's distances
of an image can be computed from the map of the LIP-multiplicative Asplund's
distance of a transform of this image and vice-versa. Both maps will be related
by the LIP isomorphism which will allow to pass from the image space of the
LIP-additive distance map to the positive real function space of the
LIP-multiplicative distance map. Experiments will illustrate this relation and
the robustness of the LIP-additive Asplund's metric to lighting changes.
| [
{
"created": "Wed, 17 Jul 2019 13:32:58 GMT",
"version": "v1"
}
] | 2019-07-18 | [
[
"Noyel",
"Guillaume",
"",
"IPRI, SIGPH@iPRI"
]
] | Functional Asplund's metrics were recently introduced to perform pattern matching robust to lighting changes thanks to double-sided probing in the Logarithmic Image Processing (LIP) framework. Two metrics were defined, namely the LIP-multiplicative Asplund's metric which is robust to variations of object thickness (or opacity) and the LIP-additive Asplund's metric which is robust to variations of camera exposure-time (or light intensity). Maps of distances-i.e. maps of these metric values-were also computed between a reference template and an image. Recently, it was proven that the map of LIP-multiplicative Asplund's distances corresponds to mathematical morphology operations. In this paper, the link between both metrics and between their corresponding distance maps will be demonstrated. It will be shown that the map of LIP-additive Asplund's distances of an image can be computed from the map of the LIP-multiplicative Asplund's distance of a transform of this image and vice-versa. Both maps will be related by the LIP isomorphism which will allow to pass from the image space of the LIP-additive distance map to the positive real function space of the LIP-multiplicative distance map. Experiments will illustrate this relation and the robustness of the LIP-additive Asplund's metric to lighting changes.
2112.10250 | Palash Dey | Arnab Maiti, Palash Dey | Parameterized Algorithms for Kidney Exchange | 20 pages, appeared as an extended abstract in AAMAS 2022 and full
paper in IJCAI 2022 | null | null | null | cs.GT cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In kidney exchange programs, multiple patient-donor pairs each of whom are
otherwise incompatible, exchange their donors to receive compatible kidneys.
The Kidney Exchange problem is typically modelled as a directed graph where
every vertex is either an altruistic donor or a pair of patient and donor;
directed edges are added from a donor to its compatible patients. The
computational task is to find if there exists a collection of disjoint cycles
and paths starting from altruistic donor vertices of length at most l_c and l_p
respectively that covers at least some specific number t of non-altruistic
vertices (patients). We study parameterized algorithms for the kidney exchange
problem in this paper. Specifically, we design FPT algorithms parameterized by
each of the following parameters: (1) the number of patients who receive
kidney, (2) treewidth of the input graph + max{l_p, l_c}, and (3) the number of
vertex types in the input graph when l_p <= l_c. We also present interesting
algorithmic and hardness results on the kernelization complexity of the
problem. Finally, we present an approximation algorithm for an important
special case of Kidney Exchange.
| [
{
"created": "Sun, 19 Dec 2021 20:32:35 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jun 2024 10:02:37 GMT",
"version": "v2"
}
] | 2024-06-17 | [
[
"Maiti",
"Arnab",
""
],
[
"Dey",
"Palash",
""
]
] | In kidney exchange programs, multiple patient-donor pairs each of whom are otherwise incompatible, exchange their donors to receive compatible kidneys. The Kidney Exchange problem is typically modelled as a directed graph where every vertex is either an altruistic donor or a pair of patient and donor; directed edges are added from a donor to its compatible patients. The computational task is to find if there exists a collection of disjoint cycles and paths starting from altruistic donor vertices of length at most l_c and l_p respectively that covers at least some specific number t of non-altruistic vertices (patients). We study parameterized algorithms for the kidney exchange problem in this paper. Specifically, we design FPT algorithms parameterized by each of the following parameters: (1) the number of patients who receive kidney, (2) treewidth of the input graph + max{l_p, l_c}, and (3) the number of vertex types in the input graph when l_p <= l_c. We also present interesting algorithmic and hardness results on the kernelization complexity of the problem. Finally, we present an approximation algorithm for an important special case of Kidney Exchange. |
2206.09004 | Franz Mayr | Franz Mayr, Sergio Yovine, Federico Pan, Nicolas Basset, Thao Dang | Towards Efficient Active Learning of PDFA | 11 pages, 7 figures, workshop paper | null | null | null | cs.FL cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | We propose a new active learning algorithm for PDFA based on three main
aspects: a congruence over states which takes into account next-symbol
probability distributions, a quantization that copes with differences in
distributions, and an efficient tree-based data structure. Experiments showed
significant performance gains with respect to reference implementations.
| [
{
"created": "Fri, 17 Jun 2022 20:48:58 GMT",
"version": "v1"
}
] | 2022-06-22 | [
[
"Mayr",
"Franz",
""
],
[
"Yovine",
"Sergio",
""
],
[
"Pan",
"Federico",
""
],
[
"Basset",
"Nicolas",
""
],
[
"Dang",
"Thao",
""
]
] | We propose a new active learning algorithm for PDFA based on three main aspects: a congruence over states which takes into account next-symbol probability distributions, a quantization that copes with differences in distributions, and an efficient tree-based data structure. Experiments showed significant performance gains with respect to reference implementations. |
2402.12315 | Xinran Wang | Xinran Wang and Nicolas Rojas | Cosserat Rod Modeling and Validation for a Soft Continuum Robot with
Self-Controllable Variable Curvature | Accepted for IEEE RoboSoft Conference 2024, April 14-17 | null | 10.1109/RoboSoft60065.2024.10522028 | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a Cosserat rod based mathematical model for modeling a
self-controllable variable curvature soft continuum robot. This soft continuum
robot has a hollow inner channel and was developed with the ability to perform
variable curvature utilizing a growing spine. The growing spine is able to grow
and retract while modifies its stiffness through milli-size particle (glass
bubble) granular jamming. This soft continuum robot can then perform continuous
curvature variation, unlike previous approaches whose curvature variation is
discrete and depends on the number of locking mechanisms or manual
configurations. The robot poses an emergent modeling problem due to the
variable stiffness growing spine which is addressed in this paper. We
investigate the property of growing spine stiffness and incorporate it into the
Cosserat rod model by implementing a combined stiffness approach. We conduct
experiments with the soft continuum robot in various configurations and
compared the results with our developed mathematical model. The results show
that the mathematical model based on the adapted Cosserat rod matches the
experimental results with only a 3.3\% error with respect to the length of the
soft continuum robot.
| [
{
"created": "Mon, 19 Feb 2024 17:37:11 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jun 2024 15:13:09 GMT",
"version": "v2"
}
] | 2024-06-28 | [
[
"Wang",
"Xinran",
""
],
[
"Rojas",
"Nicolas",
""
]
] | This paper introduces a Cosserat rod based mathematical model for modeling a self-controllable variable curvature soft continuum robot. This soft continuum robot has a hollow inner channel and was developed with the ability to perform variable curvature utilizing a growing spine. The growing spine is able to grow and retract while modifies its stiffness through milli-size particle (glass bubble) granular jamming. This soft continuum robot can then perform continuous curvature variation, unlike previous approaches whose curvature variation is discrete and depends on the number of locking mechanisms or manual configurations. The robot poses an emergent modeling problem due to the variable stiffness growing spine which is addressed in this paper. We investigate the property of growing spine stiffness and incorporate it into the Cosserat rod model by implementing a combined stiffness approach. We conduct experiments with the soft continuum robot in various configurations and compared the results with our developed mathematical model. The results show that the mathematical model based on the adapted Cosserat rod matches the experimental results with only a 3.3\% error with respect to the length of the soft continuum robot. |
0806.3799 | Sina Jafarpour | Robert Calderbank, Stephen Howard, Sina Jafarpour | A Sublinear Algorithm for Sparse Reconstruction with l2/l2 Recovery
Guarantees | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compressed Sensing aims to capture attributes of a sparse signal using very
few measurements. Cand\`{e}s and Tao showed that sparse reconstruction is
possible if the sensing matrix acts as a near isometry on all
$\boldsymbol{k}$-sparse signals. This property holds with overwhelming
probability if the entries of the matrix are generated by an iid Gaussian or
Bernoulli process. There has been significant recent interest in an alternative
signal processing framework; exploiting deterministic sensing matrices that
with overwhelming probability act as a near isometry on $\boldsymbol{k}$-sparse
vectors with uniformly random support, a geometric condition that is called the
Statistical Restricted Isometry Property or StRIP. This paper considers a
family of deterministic sensing matrices satisfying the StRIP that are based on
\srm codes (binary chirps) and a $\boldsymbol{k}$-sparse reconstruction
algorithm with sublinear complexity. In the presence of stochastic noise in the
data domain, this paper derives bounds on the $\boldsymbol{\ell_2}$ accuracy of
approximation in terms of the $\boldsymbol{\ell_2}$ norm of the measurement
noise and the accuracy of the best $\boldsymbol{k}$-sparse approximation, also
measured in the $\boldsymbol{\ell_2}$ norm. This type of $\boldsymbol{\ell_2
/\ell_2}$ bound is tighter than the standard $\boldsymbol{\ell_2 /\ell_1}$ or
$\boldsymbol{\ell_1/ \ell_1}$ bounds.
| [
{
"created": "Tue, 24 Jun 2008 02:16:08 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Oct 2009 22:54:03 GMT",
"version": "v2"
}
] | 2009-10-18 | [
[
"Calderbank",
"Robert",
""
],
[
"Howard",
"Stephen",
""
],
[
"Jafarpour",
"Sina",
""
]
] | Compressed Sensing aims to capture attributes of a sparse signal using very few measurements. Cand\`{e}s and Tao showed that sparse reconstruction is possible if the sensing matrix acts as a near isometry on all $\boldsymbol{k}$-sparse signals. This property holds with overwhelming probability if the entries of the matrix are generated by an iid Gaussian or Bernoulli process. There has been significant recent interest in an alternative signal processing framework; exploiting deterministic sensing matrices that with overwhelming probability act as a near isometry on $\boldsymbol{k}$-sparse vectors with uniformly random support, a geometric condition that is called the Statistical Restricted Isometry Property or StRIP. This paper considers a family of deterministic sensing matrices satisfying the StRIP that are based on \srm codes (binary chirps) and a $\boldsymbol{k}$-sparse reconstruction algorithm with sublinear complexity. In the presence of stochastic noise in the data domain, this paper derives bounds on the $\boldsymbol{\ell_2}$ accuracy of approximation in terms of the $\boldsymbol{\ell_2}$ norm of the measurement noise and the accuracy of the best $\boldsymbol{k}$-sparse approximation, also measured in the $\boldsymbol{\ell_2}$ norm. This type of $\boldsymbol{\ell_2 /\ell_2}$ bound is tighter than the standard $\boldsymbol{\ell_2 /\ell_1}$ or $\boldsymbol{\ell_1/ \ell_1}$ bounds. |
1806.08771 | David Feller | David Feller, Joe B. Wells, Fairouz Kamareddine (ULTRA), Sebastien
Carlier | What Does This Notation Mean Anyway? | null | null | null | null | cs.LO cs.FL cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following the introduction of BNF notation by Backus for the Algol 60 report
and subsequent notational variants, a metalanguage involving formal "grammars"
has developed for discussing structured objects in Computer Science and
Mathematical Logic. We refer to this offspring of BNF as Math-BNF or MBNF, to
the original BNF and its notational variants just as BNF, and to aspects common
to both as BNF-style. What all BNF-style notations share is the use of
production rules roughly of this form: $$\bullet \mathrel{::=} \circ_1 \mid
\cdots \mid \circ_n $$ Normally, such a rule says "every instance of $\circ_i$
for $i \in \{1, \ldots, n\}$ is also an instance of $\bullet$". MBNF is
distinct from BNF in the entities and operations it allows. Instead of strings,
MBNF builds arrangements of symbols that we call math-text. Sometimes "syntax"
is defined by interleaving MBNF production rules and other mathematical
definitions that can contain chunks of math-text. There is no clear definition
of MBNF. Readers do not have a document which tells them how MBNF is to be read
and must learn MBNF through a process of cultural initiation. To the extent
that MBNF is defined, it is largely through examples scattered throughout the
literature and which require readers to guess the mathematical structures
underpinning them. This paper gives MBNF examples illustrating some of the
differences between MBNF and BNF. We propose a definition of syntactic math
text (SMT) which handles many (but far from all) uses of math-text and MBNF in
the wild. We aim to balance the goal of being accessible and not requiring too
much prerequisite knowledge with the conflicting goal of providing a rich
mathematical structure that already supports many uses and has possibilities to
be extended to support more challenging cases.
| [
{
"created": "Tue, 12 Jun 2018 13:07:15 GMT",
"version": "v1"
}
] | 2018-06-25 | [
[
"Feller",
"David",
"",
"ULTRA"
],
[
"Wells",
"Joe B.",
"",
"ULTRA"
],
[
"Kamareddine",
"Fairouz",
"",
"ULTRA"
],
[
"Carlier",
"Sebastien",
""
]
] | Following the introduction of BNF notation by Backus for the Algol 60 report and subsequent notational variants, a metalanguage involving formal "grammars" has developed for discussing structured objects in Computer Science and Mathematical Logic. We refer to this offspring of BNF as Math-BNF or MBNF, to the original BNF and its notational variants just as BNF, and to aspects common to both as BNF-style. What all BNF-style notations share is the use of production rules roughly of this form: $$\bullet \mathrel{::=} \circ_1 \mid \cdots \mid \circ_n $$ Normally, such a rule says "every instance of $\circ_i$ for $i \in \{1, \ldots, n\}$ is also an instance of $\bullet$". MBNF is distinct from BNF in the entities and operations it allows. Instead of strings, MBNF builds arrangements of symbols that we call math-text. Sometimes "syntax" is defined by interleaving MBNF production rules and other mathematical definitions that can contain chunks of math-text. There is no clear definition of MBNF. Readers do not have a document which tells them how MBNF is to be read and must learn MBNF through a process of cultural initiation. To the extent that MBNF is defined, it is largely through examples scattered throughout the literature and which require readers to guess the mathematical structures underpinning them. This paper gives MBNF examples illustrating some of the differences between MBNF and BNF. We propose a definition of syntactic math text (SMT) which handles many (but far from all) uses of math-text and MBNF in the wild. We aim to balance the goal of being accessible and not requiring too much prerequisite knowledge with the conflicting goal of providing a rich mathematical structure that already supports many uses and has possibilities to be extended to support more challenging cases.
1607.08539 | Maciej Halber | Maciej Halber and Thomas Funkhouser | Fine-To-Coarse Global Registration of RGB-D Scans | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RGB-D scanning of indoor environments is important for many applications,
including real estate, interior design, and virtual reality. However, it is
still challenging to register RGB-D images from a hand-held camera over a long
video sequence into a globally consistent 3D model. Current methods often can
lose tracking or drift and thus fail to reconstruct salient structures in large
environments (e.g., parallel walls in different rooms). To address this
problem, we propose a "fine-to-coarse" global registration algorithm that
leverages robust registrations at finer scales to seed detection and
enforcement of new correspondence and structural constraints at coarser scales.
To test global registration algorithms, we provide a benchmark with 10,401
manually-clicked point correspondences in 25 scenes from the SUN3D dataset.
During experiments with this benchmark, we find that our fine-to-coarse
algorithm registers long RGB-D sequences better than previous methods.
| [
{
"created": "Thu, 28 Jul 2016 17:19:46 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Aug 2016 15:59:00 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Nov 2016 04:55:29 GMT",
"version": "v3"
}
] | 2016-11-24 | [
[
"Halber",
"Maciej",
""
],
[
"Funkhouser",
"Thomas",
""
]
] | RGB-D scanning of indoor environments is important for many applications, including real estate, interior design, and virtual reality. However, it is still challenging to register RGB-D images from a hand-held camera over a long video sequence into a globally consistent 3D model. Current methods often can lose tracking or drift and thus fail to reconstruct salient structures in large environments (e.g., parallel walls in different rooms). To address this problem, we propose a "fine-to-coarse" global registration algorithm that leverages robust registrations at finer scales to seed detection and enforcement of new correspondence and structural constraints at coarser scales. To test global registration algorithms, we provide a benchmark with 10,401 manually-clicked point correspondences in 25 scenes from the SUN3D dataset. During experiments with this benchmark, we find that our fine-to-coarse algorithm registers long RGB-D sequences better than previous methods. |
2406.13873 | Yu Song | Yu Song, Haitao Mao, Jiachen Xiao, Jingzhe Liu, Zhikai Chen, Wei Jin,
Carl Yang, Jiliang Tang, Hui Liu | A Pure Transformer Pretraining Framework on Text-attributed Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pretraining plays a pivotal role in acquiring generalized knowledge from
large-scale data, achieving remarkable successes as evidenced by large models
in CV and NLP. However, progress in the graph domain remains limited due to
fundamental challenges such as feature heterogeneity and structural
heterogeneity. Recently, increasing efforts have been made to enhance node
feature quality with Large Language Models (LLMs) on text-attributed graphs
(TAGs), demonstrating superiority to traditional bag-of-words or word2vec
techniques. These high-quality node features reduce the previously critical
role of graph structure, resulting in a modest performance gap between Graph
Neural Networks (GNNs) and structure-agnostic Multi-Layer Perceptrons (MLPs).
Motivated by this, we introduce a feature-centric pretraining perspective by
treating graph structure as a prior and leveraging the rich, unified feature
space to learn refined interaction patterns that generalize across graphs. Our
framework, Graph Sequence Pretraining with Transformer (GSPT), samples node
contexts through random walks and employs masked feature reconstruction to
capture pairwise proximity in the LLM-unified feature space using a standard
Transformer. By utilizing unified text representations rather than varying
structures, our framework achieves significantly better transferability among
graphs within the same domain. GSPT can be easily adapted to both node
classification and link prediction, demonstrating promising empirical success
on various datasets.
| [
{
"created": "Wed, 19 Jun 2024 22:30:08 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Song",
"Yu",
""
],
[
"Mao",
"Haitao",
""
],
[
"Xiao",
"Jiachen",
""
],
[
"Liu",
"Jingzhe",
""
],
[
"Chen",
"Zhikai",
""
],
[
"Jin",
"Wei",
""
],
[
"Yang",
"Carl",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Liu",
"Hui",
""
]
] | Pretraining plays a pivotal role in acquiring generalized knowledge from large-scale data, achieving remarkable successes as evidenced by large models in CV and NLP. However, progress in the graph domain remains limited due to fundamental challenges such as feature heterogeneity and structural heterogeneity. Recently, increasing efforts have been made to enhance node feature quality with Large Language Models (LLMs) on text-attributed graphs (TAGs), demonstrating superiority to traditional bag-of-words or word2vec techniques. These high-quality node features reduce the previously critical role of graph structure, resulting in a modest performance gap between Graph Neural Networks (GNNs) and structure-agnostic Multi-Layer Perceptrons (MLPs). Motivated by this, we introduce a feature-centric pretraining perspective by treating graph structure as a prior and leveraging the rich, unified feature space to learn refined interaction patterns that generalize across graphs. Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks and employs masked feature reconstruction to capture pairwise proximity in the LLM-unified feature space using a standard Transformer. By utilizing unified text representations rather than varying structures, our framework achieves significantly better transferability among graphs within the same domain. GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets. |
1904.07451 | Yash Goyal | Yash Goyal and Ziyan Wu and Jan Ernst and Dhruv Batra and Devi Parikh
and Stefan Lee | Counterfactual Visual Explanations | null | null | null | null | cs.LG cs.AI cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we develop a technique to produce counterfactual visual
explanations. Given a 'query' image $I$ for which a vision system predicts
class $c$, a counterfactual visual explanation identifies how $I$ could change
such that the system would output a different specified class $c'$. To do this,
we select a 'distractor' image $I'$ that the system predicts as class $c'$ and
identify spatial regions in $I$ and $I'$ such that replacing the identified
region in $I$ with the identified region in $I'$ would push the system towards
classifying $I$ as $c'$. We apply our approach to multiple image classification
datasets generating qualitative results showcasing the interpretability and
discriminativeness of our counterfactual explanations. To explore the
effectiveness of our explanations in teaching humans, we present machine
teaching experiments for the task of fine-grained bird classification. We find
that users trained to distinguish bird species fare better when given access to
counterfactual explanations in addition to training examples.
| [
{
"created": "Tue, 16 Apr 2019 04:16:11 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2019 16:49:55 GMT",
"version": "v2"
}
] | 2019-06-12 | [
[
"Goyal",
"Yash",
""
],
[
"Wu",
"Ziyan",
""
],
[
"Ernst",
"Jan",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Parikh",
"Devi",
""
],
[
"Lee",
"Stefan",
""
]
] | In this work, we develop a technique to produce counterfactual visual explanations. Given a 'query' image $I$ for which a vision system predicts class $c$, a counterfactual visual explanation identifies how $I$ could change such that the system would output a different specified class $c'$. To do this, we select a 'distractor' image $I'$ that the system predicts as class $c'$ and identify spatial regions in $I$ and $I'$ such that replacing the identified region in $I$ with the identified region in $I'$ would push the system towards classifying $I$ as $c'$. We apply our approach to multiple image classification datasets generating qualitative results showcasing the interpretability and discriminativeness of our counterfactual explanations. To explore the effectiveness of our explanations in teaching humans, we present machine teaching experiments for the task of fine-grained bird classification. We find that users trained to distinguish bird species fare better when given access to counterfactual explanations in addition to training examples. |
1309.6838 | Jean Honorio | Jean Honorio, Tommi S. Jaakkola | Inverse Covariance Estimation for High-Dimensional Data in Linear Time
and Space: Spectral Methods for Riccati and Sparse Models | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | Uncertainty in Artificial Intelligence (UAI), 2013 | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose maximum likelihood estimation for learning Gaussian graphical
models with a Gaussian (ell_2^2) prior on the parameters. This is in contrast
to the commonly used Laplace (ell_1) prior for encouraging sparseness. We show
that our optimization problem leads to a Riccati matrix equation, which has a
closed form solution. We propose an efficient algorithm that performs a
singular value decomposition of the training data. Our algorithm is
O(NT^2)-time and O(NT)-space for N variables and T samples. Our method is
tailored to high-dimensional problems (N gg T), in which sparseness promoting
methods become intractable. Furthermore, instead of obtaining a single solution
for a specific regularization parameter, our algorithm finds the whole solution
path. We show that the method has logarithmic sample complexity under the
spiked covariance model. We also propose sparsification of the dense solution
with provable performance guarantees. We provide techniques for using our
learnt models, such as removing unimportant variables, computing likelihoods
and conditional distributions. Finally, we show promising results in several
gene expression datasets.
| [
{
"created": "Thu, 26 Sep 2013 12:41:38 GMT",
"version": "v1"
}
] | 2018-11-16 | [
[
"Honorio",
"Jean",
""
],
[
"Jaakkola",
"Tommi S.",
""
]
] | We propose maximum likelihood estimation for learning Gaussian graphical models with a Gaussian (ell_2^2) prior on the parameters. This is in contrast to the commonly used Laplace (ell_1) prior for encouraging sparseness. We show that our optimization problem leads to a Riccati matrix equation, which has a closed form solution. We propose an efficient algorithm that performs a singular value decomposition of the training data. Our algorithm is O(NT^2)-time and O(NT)-space for N variables and T samples. Our method is tailored to high-dimensional problems (N gg T), in which sparseness promoting methods become intractable. Furthermore, instead of obtaining a single solution for a specific regularization parameter, our algorithm finds the whole solution path. We show that the method has logarithmic sample complexity under the spiked covariance model. We also propose sparsification of the dense solution with provable performance guarantees. We provide techniques for using our learnt models, such as removing unimportant variables, computing likelihoods and conditional distributions. Finally, we show promising results in several gene expression datasets. |
1408.0845 | Tao Zhou | Jing Zhao, Lili Miao, Haiyang Fang, Qian-Ming Zhang, Min Nie, Tao Zhou | Predicting missing links and their weights via reliable-route-based
method | 5 pages, 4 tables | Scientific Reports 5 (2015) 12261 | 10.1038/srep12261 | null | cs.SI cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Link prediction aims to uncover missing links or predict the emergence of
future relationships according to the current network structure. Plenty of
algorithms have been developed for link prediction in unweighted networks, with
only a very few of them having been extended to weighted networks. Thus far,
how to predict weights of links is important but rarely studied. In this
Letter, we present a reliable-route-based method to extend unweighted local
similarity indices to weighted indices and propose a method to predict both the
link existence and link weights accordingly. Experiments on different real
networks suggest that the weighted resource allocation index has the best
performance to predict the existence of links, while the reliable-route-based
weighted resource allocation index performs noticeably better on weight
prediction. Further analysis shows a strong correlation for both link
prediction and weight prediction: the larger the clustering coefficient, the
higher the prediction accuracy.
| [
{
"created": "Tue, 5 Aug 2014 01:07:41 GMT",
"version": "v1"
}
] | 2015-09-22 | [
[
"Zhao",
"Jing",
""
],
[
"Miao",
"Lili",
""
],
[
"Fang",
"Haiyang",
""
],
[
"Zhang",
"Qian-Ming",
""
],
[
"Nie",
"Min",
""
],
[
"Zhou",
"Tao",
""
]
] | Link prediction aims to uncover missing links or predict the emergence of future relationships according to the current network structure. Plenty of algorithms have been developed for link prediction in unweighted networks, with only a very few of them having been extended to weighted networks. Thus far, how to predict weights of links is important but rarely studied. In this Letter, we present a reliable-route-based method to extend unweighted local similarity indices to weighted indices and propose a method to predict both the link existence and link weights accordingly. Experiments on different real networks suggest that the weighted resource allocation index has the best performance to predict the existence of links, while the reliable-route-based weighted resource allocation index performs noticeably better on weight prediction. Further analysis shows a strong correlation for both link prediction and weight prediction: the larger the clustering coefficient, the higher the prediction accuracy. |
1911.05702 | Tong Wang | Tong Wang and Fujie Jin and Yu Hu and Yuan Cheng | Early Predictions for Medical Crowdfunding: A Deep Learning Approach
Using Diverse Inputs | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical crowdfunding is a popular channel for people needing financial help
paying medical bills to collect donations from large numbers of people.
However, large heterogeneity exists in donations across cases, and fundraisers
face significant uncertainty in whether their crowdfunding campaigns can meet
fundraising goals. Therefore, it is important to provide early warnings for
fundraisers if such a channel will eventually fail. In this study, we aim to
develop novel algorithms to provide accurate and timely predictions of
fundraising performance, to better inform fundraisers. In particular, we
propose a new approach to combine time-series features and time-invariant
features in the deep learning model, to process diverse sources of input data.
Compared with baseline models, our model achieves better accuracy and requires
a shorter observation window of the time-varying features from the campaign
launch to provide robust predictions with high confidence. To extract
interpretable insights, we further conduct a multivariate time-series
clustering analysis and identify four typical temporal donation patterns. This
demonstrates the heterogeneity in the features and how they relate to the
fundraising outcome. The prediction model and the interpretable insights can be
applied to assist fundraisers with better promoting their fundraising campaigns
and can potentially help crowdfunding platforms to provide more timely feedback
to all fundraisers. Our proposed framework is also generalizable to other
fields where diverse structured and unstructured data are valuable for
predictions.
| [
{
"created": "Sat, 9 Nov 2019 06:08:10 GMT",
"version": "v1"
}
] | 2019-11-25 | [
[
"Wang",
"Tong",
""
],
[
"Jin",
"Fujie",
""
],
[
"Hu",
"Yu",
""
],
[
"Cheng",
"Yuan",
""
]
] | Medical crowdfunding is a popular channel for people needing financial help paying medical bills to collect donations from large numbers of people. However, large heterogeneity exists in donations across cases, and fundraisers face significant uncertainty in whether their crowdfunding campaigns can meet fundraising goals. Therefore, it is important to provide early warnings for fundraisers if such a channel will eventually fail. In this study, we aim to develop novel algorithms to provide accurate and timely predictions of fundraising performance, to better inform fundraisers. In particular, we propose a new approach to combine time-series features and time-invariant features in the deep learning model, to process diverse sources of input data. Compared with baseline models, our model achieves better accuracy and requires a shorter observation window of the time-varying features from the campaign launch to provide robust predictions with high confidence. To extract interpretable insights, we further conduct a multivariate time-series clustering analysis and identify four typical temporal donation patterns. This demonstrates the heterogeneity in the features and how they relate to the fundraising outcome. The prediction model and the interpretable insights can be applied to assist fundraisers with better promoting their fundraising campaigns and can potentially help crowdfunding platforms to provide more timely feedback to all fundraisers. Our proposed framework is also generalizable to other fields where diverse structured and unstructured data are valuable for predictions. |
1604.03888 | Mohammad Mohammadi Amiri Mr. | Mohammad Mohammadi Amiri and Deniz Gunduz | Fundamental Limits of Coded Caching: Improved Delivery Rate-Cache
Capacity Trade-off | To appear in IEEE Transactions on Communications | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A centralized coded caching system, consisting of a server delivering N
popular files, each of size F bits, to K users through an error-free shared
link, is considered. It is assumed that each user is equipped with a local
cache memory with capacity MF bits, and contents can be proactively cached into
these caches over a low traffic period; however, without the knowledge of the
user demands. During the peak traffic period each user requests a single file
from the server. The goal is to minimize the number of bits delivered by the
server over the shared link, known as the delivery rate, over all user demand
combinations. A novel coded caching scheme for the cache capacity of M= (N-1)/K
is proposed. It is shown that the proposed scheme achieves a smaller delivery
rate than the existing coded caching schemes in the literature when K > N >= 3.
Furthermore, we argue that the delivery rate of the proposed scheme is within a
constant multiplicative factor of 2 of the optimal delivery rate for cache
capacities 1/K <= M <= (N-1)/K, when K > N >= 3.
| [
{
"created": "Wed, 13 Apr 2016 17:55:03 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Dec 2016 22:12:24 GMT",
"version": "v2"
}
] | 2016-12-15 | [
[
"Amiri",
"Mohammad Mohammadi",
""
],
[
"Gunduz",
"Deniz",
""
]
] | A centralized coded caching system, consisting of a server delivering N popular files, each of size F bits, to K users through an error-free shared link, is considered. It is assumed that each user is equipped with a local cache memory with capacity MF bits, and contents can be proactively cached into these caches over a low traffic period; however, without the knowledge of the user demands. During the peak traffic period each user requests a single file from the server. The goal is to minimize the number of bits delivered by the server over the shared link, known as the delivery rate, over all user demand combinations. A novel coded caching scheme for the cache capacity of M= (N-1)/K is proposed. It is shown that the proposed scheme achieves a smaller delivery rate than the existing coded caching schemes in the literature when K > N >= 3. Furthermore, we argue that the delivery rate of the proposed scheme is within a constant multiplicative factor of 2 of the optimal delivery rate for cache capacities 1/K <= M <= (N-1)/K, when K > N >= 3. |
2003.00953 | Yueting Chen | Yueting Chen and Xiaohui Yu and Nick Koudas | Evaluating Temporal Queries Over Video Feeds | null | null | null | null | cs.DB cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in Computer Vision and Deep Learning made possible the
efficient extraction of a schema from frames of streaming video. As such, a
stream of objects and their associated classes along with unique object
identifiers derived via object tracking can be generated, providing unique
objects as they are captured across frames. In this paper we initiate a study
of temporal queries involving objects and their co-occurrences in video feeds.
For example, queries that identify video segments during which the same two red
cars and the same two humans appear jointly for five minutes are of interest to
many applications ranging from law enforcement to security and safety. We take
the first step and define such queries in a way that they incorporate certain
physical aspects of video capture such as object occlusion. We present an
architecture consisting of three layers, namely object detection/tracking,
intermediate data generation and query evaluation. We propose two
techniques, MFS and SSG, to organize all detected objects in the intermediate
data generation layer, which effectively, given the queries, minimizes the
number of objects and frames that have to be considered during query
evaluation. We also introduce an algorithm called State Traversal (ST) that
processes incoming frames against the SSG and efficiently prunes objects and
frames unrelated to query evaluation, while maintaining all states required for
succinct query evaluation. We present the results of a thorough experimental
evaluation utilizing both real and synthetic data establishing the trade-offs
between MFS and SSG. We stress various parameters of interest in our evaluation
and demonstrate that the proposed query evaluation methodology coupled with the
proposed algorithms is capable of evaluating temporal queries over video feeds
efficiently, achieving orders of magnitude performance benefits.
| [
{
"created": "Mon, 2 Mar 2020 14:55:57 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Mar 2020 14:16:18 GMT",
"version": "v2"
},
{
"created": "Thu, 5 Mar 2020 22:22:46 GMT",
"version": "v3"
}
] | 2020-03-09 | [
[
"Chen",
"Yueting",
""
],
[
"Yu",
"Xiaohui",
""
],
[
"Koudas",
"Nick",
""
]
] | Recent advances in Computer Vision and Deep Learning made possible the efficient extraction of a schema from frames of streaming video. As such, a stream of objects and their associated classes along with unique object identifiers derived via object tracking can be generated, providing unique objects as they are captured across frames. In this paper we initiate a study of temporal queries involving objects and their co-occurrences in video feeds. For example, queries that identify video segments during which the same two red cars and the same two humans appear jointly for five minutes are of interest to many applications ranging from law enforcement to security and safety. We take the first step and define such queries in a way that they incorporate certain physical aspects of video capture such as object occlusion. We present an architecture consisting of three layers, namely object detection/tracking, intermediate data generation and query evaluation. We propose two techniques, MFS and SSG, to organize all detected objects in the intermediate data generation layer, which effectively, given the queries, minimizes the number of objects and frames that have to be considered during query evaluation. We also introduce an algorithm called State Traversal (ST) that processes incoming frames against the SSG and efficiently prunes objects and frames unrelated to query evaluation, while maintaining all states required for succinct query evaluation. We present the results of a thorough experimental evaluation utilizing both real and synthetic data establishing the trade-offs between MFS and SSG. We stress various parameters of interest in our evaluation and demonstrate that the proposed query evaluation methodology coupled with the proposed algorithms is capable of evaluating temporal queries over video feeds efficiently, achieving orders of magnitude performance benefits. |
2211.05077 | Guangyue Xu | Guangyue Xu, Parisa Kordjamshidi, Joyce Chai | Prompting Large Pre-trained Vision-Language Models For Compositional
Concept Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work explores the zero-shot compositional learning ability of large
pre-trained vision-language models (VLMs) within the prompt-based learning
framework and proposes a model (\textit{PromptCompVL}) to solve the compositional
zero-shot learning (CZSL) problem. \textit{PromptCompVL} makes two design
choices: first, it uses soft-prompting instead of hard-prompting to inject
learnable parameters to reprogram VLMs for compositional learning. Second, to
address the compositional challenge, it uses the soft-embedding layer to learn
primitive concepts in different combinations. By combining both soft-embedding
and soft-prompting, \textit{PromptCompVL} achieves state-of-the-art performance
on the MIT-States dataset. Furthermore, our proposed model achieves consistent
improvement compared to other CLIP-based methods, which shows the effectiveness
of the proposed prompting strategies for CZSL.
| [
{
"created": "Wed, 9 Nov 2022 18:08:53 GMT",
"version": "v1"
}
] | 2022-11-10 | [
[
"Xu",
"Guangyue",
""
],
[
"Kordjamshidi",
"Parisa",
""
],
[
"Chai",
"Joyce",
""
]
] | This work explores the zero-shot compositional learning ability of large pre-trained vision-language models (VLMs) within the prompt-based learning framework and proposes a model (\textit{PromptCompVL}) to solve the compositional zero-shot learning (CZSL) problem. \textit{PromptCompVL} makes two design choices: first, it uses soft-prompting instead of hard-prompting to inject learnable parameters to reprogram VLMs for compositional learning. Second, to address the compositional challenge, it uses the soft-embedding layer to learn primitive concepts in different combinations. By combining both soft-embedding and soft-prompting, \textit{PromptCompVL} achieves state-of-the-art performance on the MIT-States dataset. Furthermore, our proposed model achieves consistent improvement compared to other CLIP-based methods, which shows the effectiveness of the proposed prompting strategies for CZSL. |
2010.08853 | Gilad Yehudai | Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron | From Local Structures to Size Generalization in Graph Neural Networks | Camera ready version for ICML 2021 | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) can process graphs of different sizes, but their
ability to generalize across sizes, specifically from small to large graphs, is
still not well understood. In this paper, we identify an important type of data
where generalization from small to large graphs is challenging: graph
distributions for which the local structure depends on the graph size. This
effect occurs in multiple important graph learning domains, including social
and biological networks. We first prove that when there is a difference between
the local structures, GNNs are not guaranteed to generalize across sizes: there
are "bad" global minima that do well on small graphs but fail on large graphs.
We then study the size-generalization problem empirically and demonstrate that
when there is a discrepancy in local structure, GNNs tend to converge to
non-generalizing solutions. Finally, we suggest two approaches for improving
size generalization, motivated by our findings. Notably, we propose a novel
Self-Supervised Learning (SSL) task aimed at learning meaningful
representations of local structures that appear in large graphs. Our SSL task
improves classification accuracy on several popular datasets.
| [
{
"created": "Sat, 17 Oct 2020 19:36:54 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Feb 2021 14:49:30 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Jul 2021 19:18:06 GMT",
"version": "v3"
}
] | 2021-07-19 | [
[
"Yehudai",
"Gilad",
""
],
[
"Fetaya",
"Ethan",
""
],
[
"Meirom",
"Eli",
""
],
[
"Chechik",
"Gal",
""
],
[
"Maron",
"Haggai",
""
]
] | Graph neural networks (GNNs) can process graphs of different sizes, but their ability to generalize across sizes, specifically from small to large graphs, is still not well understood. In this paper, we identify an important type of data where generalization from small to large graphs is challenging: graph distributions for which the local structure depends on the graph size. This effect occurs in multiple important graph learning domains, including social and biological networks. We first prove that when there is a difference between the local structures, GNNs are not guaranteed to generalize across sizes: there are "bad" global minima that do well on small graphs but fail on large graphs. We then study the size-generalization problem empirically and demonstrate that when there is a discrepancy in local structure, GNNs tend to converge to non-generalizing solutions. Finally, we suggest two approaches for improving size generalization, motivated by our findings. Notably, we propose a novel Self-Supervised Learning (SSL) task aimed at learning meaningful representations of local structures that appear in large graphs. Our SSL task improves classification accuracy on several popular datasets. |
2404.00306 | Yang Hu | Yang Hu | Leveraging Intelligent Recommender system as a first step resilience
measure -- A data-driven supply chain disruption response framework | Manuscript submitted for WSC2024 Conference | null | null | null | cs.CE cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Interest in the value of digital technologies and their potential to
increase supply chain resilience (SCRes) is growing in light of
Industry 4.0 and the global pandemic. Utilization of Recommender systems (RS)
as a supply chain (SC) resilience measure is neglected although RS is a capable
tool to enhance SC resilience from a reactive aspect. To address this problem,
this research proposed a novel data-driven supply chain disruption response
framework based on the intelligent recommender system techniques and validated
the conceptual model through a practical use case. Results show that our
framework can be implemented as an effective SC disruption mitigation measure
in the very first response phase and help SC participants get better reaction
performance after the SC disruption.
| [
{
"created": "Sat, 30 Mar 2024 10:07:02 GMT",
"version": "v1"
},
{
"created": "Tue, 7 May 2024 16:09:06 GMT",
"version": "v2"
}
] | 2024-05-08 | [
[
"Hu",
"Yang",
""
]
] | Interest in the value of digital technologies and their potential to increase supply chain resilience (SCRes) is growing in light of Industry 4.0 and the global pandemic. Utilization of Recommender systems (RS) as a supply chain (SC) resilience measure is neglected although RS is a capable tool to enhance SC resilience from a reactive aspect. To address this problem, this research proposed a novel data-driven supply chain disruption response framework based on the intelligent recommender system techniques and validated the conceptual model through a practical use case. Results show that our framework can be implemented as an effective SC disruption mitigation measure in the very first response phase and help SC participants get better reaction performance after the SC disruption. |
2407.19265 | Riyansha Singh | Riyansha Singh, Parinita Nema, Vinod K Kurmi | Towards Robust Few-shot Class Incremental Learning in Audio
Classification using Contrastive Representation | INTERSPEECH 2024 accepted | null | null | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In machine learning applications, gradual data ingress is common, especially
in audio processing where incremental learning is vital for real-time
analytics. Few-shot class-incremental learning addresses challenges arising
from limited incoming data. Existing methods often integrate additional
trainable components or rely on a fixed embedding extractor post-training on
base sessions to mitigate concerns related to catastrophic forgetting and the
dangers of model overfitting. However, using cross-entropy loss alone during
base session training is suboptimal for audio data. To address this, we propose
incorporating supervised contrastive learning to refine the representation
space, enhancing discriminative power and leading to better generalization
since it facilitates seamless integration of incremental classes upon arrival.
Experimental results on the NSynth and LibriSpeech datasets with 100 classes,
as well as the ESC dataset with 50 and 10 classes, demonstrate
state-of-the-art performance.
| [
{
"created": "Sat, 27 Jul 2024 14:16:25 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Aug 2024 09:16:12 GMT",
"version": "v2"
}
] | 2024-08-08 | [
[
"Singh",
"Riyansha",
""
],
[
"Nema",
"Parinita",
""
],
[
"Kurmi",
"Vinod K",
""
]
] | In machine learning applications, gradual data ingress is common, especially in audio processing where incremental learning is vital for real-time analytics. Few-shot class-incremental learning addresses challenges arising from limited incoming data. Existing methods often integrate additional trainable components or rely on a fixed embedding extractor post-training on base sessions to mitigate concerns related to catastrophic forgetting and the dangers of model overfitting. However, using cross-entropy loss alone during base session training is suboptimal for audio data. To address this, we propose incorporating supervised contrastive learning to refine the representation space, enhancing discriminative power and leading to better generalization since it facilitates seamless integration of incremental classes upon arrival. Experimental results on the NSynth and LibriSpeech datasets with 100 classes, as well as the ESC dataset with 50 and 10 classes, demonstrate state-of-the-art performance. |
2302.06199 | Cunjun Yu | Cunjun Yu, Yiqing Xu, Linfeng Li, David Hsu | COACH: Cooperative Robot Teaching | CoRL 2022 | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Knowledge and skills can transfer from human teachers to human students.
However, such direct transfer is often not scalable for physical tasks, as they
require one-to-one interaction, and human teachers are not available in
sufficient numbers. Machine learning enables robots to become experts and play
the role of teachers to help in this situation. In this work, we formalize
cooperative robot teaching as a Markov game, consisting of four key elements:
the target task, the student model, the teacher model, and the interactive
teaching-learning process. Under a moderate assumption, the Markov game reduces
to a partially observable Markov decision process, with an efficient
approximate solution. We illustrate our approach on two cooperative tasks, one
in a simulated video game and one with a real robot.
| [
{
"created": "Mon, 13 Feb 2023 09:15:45 GMT",
"version": "v1"
}
] | 2023-02-14 | [
[
"Yu",
"Cunjun",
""
],
[
"Xu",
"Yiqing",
""
],
[
"Li",
"Linfeng",
""
],
[
"Hsu",
"David",
""
]
] | Knowledge and skills can transfer from human teachers to human students. However, such direct transfer is often not scalable for physical tasks, as they require one-to-one interaction, and human teachers are not available in sufficient numbers. Machine learning enables robots to become experts and play the role of teachers to help in this situation. In this work, we formalize cooperative robot teaching as a Markov game, consisting of four key elements: the target task, the student model, the teacher model, and the interactive teaching-learning process. Under a moderate assumption, the Markov game reduces to a partially observable Markov decision process, with an efficient approximate solution. We illustrate our approach on two cooperative tasks, one in a simulated video game and one with a real robot. |
2009.14721 | Mohamed Abbas Hedjazi | Mohamed Abbas Hedjazi, Yakup Genc | Efficient texture-aware multi-GAN for image inpainting | 25 pages, 15 figures, 11 tables | Knowledge-Based Systems, Volume 217, 6 April 2021, 106789 | 10.1016/j.knosys.2021.106789 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent GAN-based (generative adversarial network) inpainting methods show
remarkable improvements and generate plausible images using multi-stage
networks or Contextual Attention Modules (CAM). However, these techniques
increase model complexity, limiting their application in low-resource
environments. Furthermore, they fail to generate high-resolution images with
realistic texture details due to the GAN stability problem. Motivated by these
observations, we propose a multi-GAN architecture improving both the
performance and rendering efficiency. Our training scheme optimizes the
parameters of four progressive efficient generators and discriminators in an
end-to-end manner. Filling in low-resolution images is less challenging for
GANs due to the small dimensional space. Meanwhile, it guides higher resolution
generators to learn the global structure consistency of the image. To constrain
the inpainting task and ensure fine-grained textures, we adopt an LBP-based
loss function to minimize the difference between the generated and the ground
truth textures. We conduct our experiments on Places2 and CelebHQ datasets.
Qualitative and quantitative results show that the proposed method not only
performs favorably against state-of-the-art algorithms but also speeds up the
inference time.
| [
{
"created": "Wed, 30 Sep 2020 14:58:03 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Feb 2021 15:19:43 GMT",
"version": "v2"
}
] | 2021-02-16 | [
[
"Hedjazi",
"Mohamed Abbas",
""
],
[
"Genc",
"Yakup",
""
]
] | Recent GAN-based (generative adversarial network) inpainting methods show remarkable improvements and generate plausible images using multi-stage networks or Contextual Attention Modules (CAM). However, these techniques increase model complexity, limiting their application in low-resource environments. Furthermore, they fail to generate high-resolution images with realistic texture details due to the GAN stability problem. Motivated by these observations, we propose a multi-GAN architecture improving both the performance and rendering efficiency. Our training scheme optimizes the parameters of four progressive efficient generators and discriminators in an end-to-end manner. Filling in low-resolution images is less challenging for GANs due to the small dimensional space. Meanwhile, it guides higher resolution generators to learn the global structure consistency of the image. To constrain the inpainting task and ensure fine-grained textures, we adopt an LBP-based loss function to minimize the difference between the generated and the ground truth textures. We conduct our experiments on Places2 and CelebHQ datasets. Qualitative and quantitative results show that the proposed method not only performs favorably against state-of-the-art algorithms but also speeds up the inference time. |
1704.02767 | Manuela Fischer | Manuela Fischer, Mohsen Ghaffari, Fabian Kuhn | Deterministic Distributed Edge-Coloring via Hypergraph Maximal Matching | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a deterministic distributed algorithm that computes a
$(2\Delta-1)$-edge-coloring, or even list-edge-coloring, in any $n$-node graph
with maximum degree $\Delta$, in $O(\log^7 \Delta \log n)$ rounds. This answers
one of the long-standing open questions of \emph{distributed graph algorithms}
from the late 1980s, which asked for a polylogarithmic-time algorithm. See,
e.g., Open Problem 4 in the Distributed Graph Coloring book of Barenboim and
Elkin. The previous best round complexities were $2^{O(\sqrt{\log n})}$ by
Panconesi and Srinivasan [STOC'92] and $\tilde{O}(\sqrt{\Delta}) + O(\log^* n)$
by Fraigniaud, Heinrich, and Kosowski [FOCS'16]. A corollary of our
deterministic list-edge-coloring also improves the randomized complexity of
$(2\Delta-1)$-edge-coloring to poly$(\log\log n)$ rounds.
The key technical ingredient is a deterministic distributed algorithm for
\emph{hypergraph maximal matching}, which we believe will be of interest beyond
this result. In any hypergraph of rank $r$ --- where each hyperedge has at most
$r$ vertices --- with $n$ nodes and maximum degree $\Delta$, this algorithm
computes a maximal matching in $O(r^5 \log^{6+\log r } \Delta \log n)$ rounds.
This hypergraph matching algorithm and its extensions lead to a number of
other results. In particular, we obtain a polylogarithmic-time deterministic
distributed maximal independent set algorithm for graphs with bounded
neighborhood independence, hence answering Open Problem 5 of Barenboim and
Elkin's book; a $((\log \Delta/\varepsilon)^{O(\log (1/\varepsilon))})$-round
deterministic algorithm for $(1+\varepsilon)$-approximation of maximum
matching; and a quasi-polylogarithmic-time deterministic distributed algorithm
for orienting $\lambda$-arboricity graphs with out-degree at most
$(1+\varepsilon)\lambda$, for any constant $\varepsilon>0$, hence partially
answering Open Problem 10 of Barenboim and Elkin's book.
| [
{
"created": "Mon, 10 Apr 2017 09:03:11 GMT",
"version": "v1"
}
] | 2017-04-11 | [
[
"Fischer",
"Manuela",
""
],
[
"Ghaffari",
"Mohsen",
""
],
[
"Kuhn",
"Fabian",
""
]
] | We present a deterministic distributed algorithm that computes a $(2\Delta-1)$-edge-coloring, or even list-edge-coloring, in any $n$-node graph with maximum degree $\Delta$, in $O(\log^7 \Delta \log n)$ rounds. This answers one of the long-standing open questions of \emph{distributed graph algorithms} from the late 1980s, which asked for a polylogarithmic-time algorithm. See, e.g., Open Problem 4 in the Distributed Graph Coloring book of Barenboim and Elkin. The previous best round complexities were $2^{O(\sqrt{\log n})}$ by Panconesi and Srinivasan [STOC'92] and $\tilde{O}(\sqrt{\Delta}) + O(\log^* n)$ by Fraigniaud, Heinrich, and Kosowski [FOCS'16]. A corollary of our deterministic list-edge-coloring also improves the randomized complexity of $(2\Delta-1)$-edge-coloring to poly$(\log\log n)$ rounds. The key technical ingredient is a deterministic distributed algorithm for \emph{hypergraph maximal matching}, which we believe will be of interest beyond this result. In any hypergraph of rank $r$ --- where each hyperedge has at most $r$ vertices --- with $n$ nodes and maximum degree $\Delta$, this algorithm computes a maximal matching in $O(r^5 \log^{6+\log r } \Delta \log n)$ rounds. This hypergraph matching algorithm and its extensions lead to a number of other results. In particular, we obtain a polylogarithmic-time deterministic distributed maximal independent set algorithm for graphs with bounded neighborhood independence, hence answering Open Problem 5 of Barenboim and Elkin's book; a $((\log \Delta/\varepsilon)^{O(\log (1/\varepsilon))})$-round deterministic algorithm for $(1+\varepsilon)$-approximation of maximum matching; and a quasi-polylogarithmic-time deterministic distributed algorithm for orienting $\lambda$-arboricity graphs with out-degree at most $(1+\varepsilon)\lambda$, for any constant $\varepsilon>0$, hence partially answering Open Problem 10 of Barenboim and Elkin's book. |
2110.07103 | Chuong Nguyen | Chuong Nguyen, Dadong Wang, Karl Von Richter, Philip Valencia, Flavio
A. P. Alvarenga, Gregory Bishop-Hurley | Video-based cattle identification and action recognition | 5 pages, 7 figures, DICTA2021 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate a working prototype for the monitoring of cow welfare by
automatically analysing the animal behaviours. Deep learning models have been
developed and tested with videos acquired on a farm, and a precision of 81.2\%
has been achieved for cow identification. An accuracy of 84.4\% has been
achieved for the detection of drinking events, and 94.4\% for the detection of
grazing events. Experimental results show that the proposed deep learning
method can be used to identify the behaviours of individual animals to enable
automated farm provenance. Our raw and ground-truth dataset will be released as
the first public video dataset for cow identification and action recognition.
Recommendations for further development are also provided.
| [
{
"created": "Thu, 14 Oct 2021 00:55:56 GMT",
"version": "v1"
}
] | 2021-10-15 | [
[
"Nguyen",
"Chuong",
""
],
[
"Wang",
"Dadong",
""
],
[
"Von Richter",
"Karl",
""
],
[
"Valencia",
"Philip",
""
],
[
"Alvarenga",
"Flavio A. P.",
""
],
[
"Bishop-Hurley",
"Gregory",
""
]
] | We demonstrate a working prototype for the monitoring of cow welfare by automatically analysing the animal behaviours. Deep learning models have been developed and tested with videos acquired on a farm, and a precision of 81.2\% has been achieved for cow identification. An accuracy of 84.4\% has been achieved for the detection of drinking events, and 94.4\% for the detection of grazing events. Experimental results show that the proposed deep learning method can be used to identify the behaviours of individual animals to enable automated farm provenance. Our raw and ground-truth dataset will be released as the first public video dataset for cow identification and action recognition. Recommendations for further development are also provided. |
2008.06570 | Monica Ribero | Peter Kairouz, M\'onica Ribero, Keith Rush, Abhradeep Thakurta | Fast Dimension Independent Private AdaGrad on Publicly Estimated
Subspaces | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the problem of empirical risk minimization (ERM) with differential
privacy. We show that noisy AdaGrad, given appropriate knowledge and conditions
on the subspace from which gradients can be drawn, achieves a regret comparable
to traditional AdaGrad plus a well-controlled term due to noise. We show a
convergence rate of $O(\text{Tr}(G_T)/T)$, where $G_T$ captures the geometry of
the gradient subspace. Since $\text{Tr}(G_T)=O(\sqrt{T})$ we can obtain faster
rates for convex and Lipschitz functions, compared to the $O(1/\sqrt{T})$ rate
achieved by known versions of noisy (stochastic) gradient descent with
comparable noise variance. In particular, we show that if the gradients lie in
a known constant rank subspace, and assuming algorithmic access to an envelope
which bounds decaying sensitivity, one can achieve faster convergence to an
excess empirical risk of $\tilde O(1/\epsilon n)$, where $\epsilon$ is the
privacy budget and $n$ the number of samples. Letting $p$ be the problem
dimension, this result implies that, by running noisy AdaGrad, we can bypass
the DP-SGD bound $\tilde O(\sqrt{p}/\epsilon n)$ in $T=(\epsilon
n)^{2/(1+2\alpha)}$ iterations, where $\alpha \geq 0$ is a parameter
controlling gradient norm decay, instead of the rate achieved by SGD of
$T=\epsilon^2n^2$. Our results operate with general convex functions in both
constrained and unconstrained minimization.
Along the way, we do a perturbation analysis of noisy AdaGrad of independent
interest. Our utility guarantee for the private ERM problem follows as a
corollary to the regret guarantee of noisy AdaGrad.
| [
{
"created": "Fri, 14 Aug 2020 20:46:38 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jan 2021 23:34:27 GMT",
"version": "v2"
}
] | 2021-02-02 | [
[
"Kairouz",
"Peter",
""
],
[
"Ribero",
"Mónica",
""
],
[
"Rush",
"Keith",
""
],
[
"Thakurta",
"Abhradeep",
""
]
] | We revisit the problem of empirical risk minimization (ERM) with differential privacy. We show that noisy AdaGrad, given appropriate knowledge and conditions on the subspace from which gradients can be drawn, achieves a regret comparable to traditional AdaGrad plus a well-controlled term due to noise. We show a convergence rate of $O(\text{Tr}(G_T)/T)$, where $G_T$ captures the geometry of the gradient subspace. Since $\text{Tr}(G_T)=O(\sqrt{T})$ we can obtain faster rates for convex and Lipschitz functions, compared to the $O(1/\sqrt{T})$ rate achieved by known versions of noisy (stochastic) gradient descent with comparable noise variance. In particular, we show that if the gradients lie in a known constant rank subspace, and assuming algorithmic access to an envelope which bounds decaying sensitivity, one can achieve faster convergence to an excess empirical risk of $\tilde O(1/\epsilon n)$, where $\epsilon$ is the privacy budget and $n$ the number of samples. Letting $p$ be the problem dimension, this result implies that, by running noisy AdaGrad, we can bypass the DP-SGD bound $\tilde O(\sqrt{p}/\epsilon n)$ in $T=(\epsilon n)^{2/(1+2\alpha)}$ iterations, where $\alpha \geq 0$ is a parameter controlling gradient norm decay, instead of the rate achieved by SGD of $T=\epsilon^2n^2$. Our results operate with general convex functions in both constrained and unconstrained minimization. Along the way, we do a perturbation analysis of noisy AdaGrad of independent interest. Our utility guarantee for the private ERM problem follows as a corollary to the regret guarantee of noisy AdaGrad. |
2401.03329 | Zhonghao Shi | Emily Zhou, Zhonghao Shi, Xiaoyang Qiao, Maja J Matari\'c, Ava K
Bittner | Designing a Socially Assistive Robot to Support Older Adults with Low
Vision | Published in Social Robotics: 13th International Conference, ICSR
2021. Springer International Publishing | null | 10.1007/978-3-030-90525-5_38 | null | cs.RO cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Socially assistive robots (SARs) have shown great promise in supplementing
and augmenting interventions to support the physical and mental well-being of
older adults. However, past work has not yet explored the potential of applying
SAR to lower the barriers of long-term low vision rehabilitation (LVR)
interventions for older adults. In this work, we present a user-informed design
process to validate the motivation and identify major design principles for
developing SAR for long-term LVR. To evaluate user-perceived usefulness and
acceptance of SAR in this novel domain, we performed a two-phase study through
user surveys. First, a group (n=38) of older adults with LV completed a
mailed-in survey. Next, a new group (n=13) of older adults with LV saw an
in-clinic SAR demo and then completed the survey. The study participants
reported that SARs would be useful, trustworthy, easy to use, and enjoyable
while providing socio-emotional support to augment LVR interventions. The
in-clinic demo group reported significantly more positive opinions of the SAR's
capabilities than did the baseline survey group that used mailed-in forms
without the SAR demo.
| [
{
"created": "Sat, 6 Jan 2024 23:23:02 GMT",
"version": "v1"
}
] | 2024-01-09 | [
[
"Zhou",
"Emily",
""
],
[
"Shi",
"Zhonghao",
""
],
[
"Qiao",
"Xiaoyang",
""
],
[
"Matarić",
"Maja J",
""
],
[
"Bittner",
"Ava K",
""
]
] | Socially assistive robots (SARs) have shown great promise in supplementing and augmenting interventions to support the physical and mental well-being of older adults. However, past work has not yet explored the potential of applying SAR to lower the barriers of long-term low vision rehabilitation (LVR) interventions for older adults. In this work, we present a user-informed design process to validate the motivation and identify major design principles for developing SAR for long-term LVR. To evaluate user-perceived usefulness and acceptance of SAR in this novel domain, we performed a two-phase study through user surveys. First, a group (n=38) of older adults with LV completed a mailed-in survey. Next, a new group (n=13) of older adults with LV saw an in-clinic SAR demo and then completed the survey. The study participants reported that SARs would be useful, trustworthy, easy to use, and enjoyable while providing socio-emotional support to augment LVR interventions. The in-clinic demo group reported significantly more positive opinions of the SAR's capabilities than did the baseline survey group that used mailed-in forms without the SAR demo. |
2306.12203 | David Gillsj\"o | David Gillsj\"o, Gabrielle Flood, Kalle {\AA}str\"om | Polygon Detection for Room Layout Estimation using Heterogeneous Graphs
and Wireframes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a neural network based semantic plane detection method
utilizing polygon representations. The method can, for example, be used to
solve room layout estimation tasks. The method builds on, combines, and
further develops several different modules from previous research. The
network takes an
RGB image and estimates a wireframe as well as a feature space using an
hourglass backbone. From these, line and junction features are sampled. The
lines and junctions are then represented as an undirected graph, from which
polygon representations of the sought planes are obtained. Two different
methods for this last step are investigated, where the most promising method is
built on a heterogeneous graph transformer. The final output is in all cases a
projection of the semantic planes in 2D. The methods are evaluated on the
Structured 3D dataset and we investigate the performance both using sampled and
estimated wireframes. The experiments show the potential of the graph-based
method by outperforming state-of-the-art methods in room layout estimation on
the 2D metrics using synthetic wireframe detections.
| [
{
"created": "Wed, 21 Jun 2023 11:55:15 GMT",
"version": "v1"
}
] | 2023-06-22 | [
[
"Gillsjö",
"David",
""
],
[
"Flood",
"Gabrielle",
""
],
[
"Åström",
"Kalle",
""
]
] | This paper presents a neural network based semantic plane detection method utilizing polygon representations. The method can, for example, be used to solve room layout estimation tasks. The method builds on, combines, and further develops several different modules from previous research. The network takes an RGB image and estimates a wireframe as well as a feature space using an hourglass backbone. From these, line and junction features are sampled. The lines and junctions are then represented as an undirected graph, from which polygon representations of the sought planes are obtained. Two different methods for this last step are investigated, where the most promising method is built on a heterogeneous graph transformer. The final output is in all cases a projection of the semantic planes in 2D. The methods are evaluated on the Structured 3D dataset and we investigate the performance both using sampled and estimated wireframes. The experiments show the potential of the graph-based method by outperforming state-of-the-art methods in room layout estimation on the 2D metrics using synthetic wireframe detections. |
2209.03603 | Jinxiang Lai | Jinxiang Lai, Wenlong Liu, Jun Liu | nVFNet-RDC: Replay and Non-Local Distillation Collaboration for
Continual Object Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Continual Learning (CL) focuses on developing algorithms with the ability to
adapt to new environments and learn new skills. This very challenging task has
generated a lot of interest in recent years, with new solutions appearing
rapidly. In this paper, we propose an nVFNet-RDC approach for continual object
detection. Our nVFNet-RDC consists of teacher-student models, and adopts replay
and feature distillation strategies. As the 1st-place solution, we achieve
55.94% and 54.65% average mAP on Track 2 and Track 3 of the 3rd CLVision
Challenge, respectively.
| [
{
"created": "Thu, 8 Sep 2022 06:59:42 GMT",
"version": "v1"
}
] | 2022-09-09 | [
[
"Lai",
"Jinxiang",
""
],
[
"Liu",
"Wenlong",
""
],
[
"Liu",
"Jun",
""
]
] | Continual Learning (CL) focuses on developing algorithms with the ability to adapt to new environments and learn new skills. This very challenging task has generated a lot of interest in recent years, with new solutions appearing rapidly. In this paper, we propose an nVFNet-RDC approach for continual object detection. Our nVFNet-RDC consists of teacher-student models, and adopts replay and feature distillation strategies. As the 1st-place solution, we achieve 55.94% and 54.65% average mAP on Track 2 and Track 3 of the 3rd CLVision Challenge, respectively. |
cs/0106042 | Judith Beumer | William McCune | MACE 2.0 Reference Manual and Guide | 10 pages | null | null | ANL/MCS-TM-249 | cs.LO cs.SC | null | MACE is a program that searches for finite models of first-order statements.
The statement to be modeled is first translated to clauses, then to relational
clauses; finally for the given domain size, the ground instances are
constructed. A Davis-Putnam-Loveland-Logemann procedure decides the
propositional problem, and any models found are translated to first-order
models. MACE is a useful complement to the theorem prover Otter, with Otter
searching for proofs and MACE looking for countermodels.
| [
{
"created": "Tue, 19 Jun 2001 16:11:19 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"McCune",
"William",
""
]
] | MACE is a program that searches for finite models of first-order statements. The statement to be modeled is first translated to clauses, then to relational clauses; finally for the given domain size, the ground instances are constructed. A Davis-Putnam-Loveland-Logemann procedure decides the propositional problem, and any models found are translated to first-order models. MACE is a useful complement to the theorem prover Otter, with Otter searching for proofs and MACE looking for countermodels. |
1810.08124 | Lina Al-Kanj Dr. | Lina Al-Kanj, Juliana Nascimento and Warren B. Powell | Approximate Dynamic Programming for Planning a Ride-Sharing System using
Autonomous Fleets of Electric Vehicles | null | null | null | null | cs.AI cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within a decade, almost every major auto company, along with fleet operators
such as Uber, has announced plans to put autonomous vehicles on the road. At
the same time, electric vehicles are quickly emerging as a next-generation
technology that is cost effective, in addition to offering the benefits of
reducing the carbon footprint. The combination of a centrally managed fleet of
driverless vehicles, along with the operating characteristics of electric
vehicles, is creating a transformative new technology that offers significant
cost savings with high service levels. This problem involves a dispatch problem
for assigning riders to cars, a surge pricing problem for deciding on the price
per trip and a planning problem for deciding on the fleet size. We use
approximate dynamic programming to develop high-quality operational dispatch
strategies to determine which car is best for a particular trip, when a car
should be recharged, and when it should be re-positioned to a different zone
which offers a higher density of trips. We prove that the value functions are
monotone in the battery and time dimensions and use hierarchical aggregation to
get better estimates of the value functions with a small number of
observations. Then, surge pricing is discussed using an adaptive learning
approach to decide on the price for each trip. Finally, we discuss the fleet
size problem which depends on the previous two problems.
| [
{
"created": "Thu, 18 Oct 2018 15:54:58 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Dec 2018 22:57:04 GMT",
"version": "v2"
}
] | 2018-12-13 | [
[
"Al-Kanj",
"Lina",
""
],
[
"Nascimento",
"Juliana",
""
],
[
"Powell",
"Warren B.",
""
]
] | Within a decade, almost every major auto company, along with fleet operators such as Uber, has announced plans to put autonomous vehicles on the road. At the same time, electric vehicles are quickly emerging as a next-generation technology that is cost effective, in addition to offering the benefits of reducing the carbon footprint. The combination of a centrally managed fleet of driverless vehicles, along with the operating characteristics of electric vehicles, is creating a transformative new technology that offers significant cost savings with high service levels. This problem involves a dispatch problem for assigning riders to cars, a surge pricing problem for deciding on the price per trip and a planning problem for deciding on the fleet size. We use approximate dynamic programming to develop high-quality operational dispatch strategies to determine which car is best for a particular trip, when a car should be recharged, and when it should be re-positioned to a different zone which offers a higher density of trips. We prove that the value functions are monotone in the battery and time dimensions and use hierarchical aggregation to get better estimates of the value functions with a small number of observations. Then, surge pricing is discussed using an adaptive learning approach to decide on the price for each trip. Finally, we discuss the fleet size problem which depends on the previous two problems. |
2009.04614 | Kun Fang | Kun Fang, Fanghui Liu, Xiaolin Huang and Jie Yang | End-to-end Kernel Learning via Generative Random Fourier Features | Accepted by Pattern Recognition | null | 10.1016/j.patcog.2022.109057 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random Fourier features (RFFs) provide a promising way for kernel learning in
a spectral case. Current RFFs-based kernel learning methods usually work in a
two-stage way. In the first-stage process, learning the optimal feature map is
often formulated as a target alignment problem, which aims to align the learned
kernel with the pre-defined target kernel (usually the ideal kernel). In the
second-stage process, a linear learner is trained on the mapped random
features. Nevertheless, the pre-defined kernel in target alignment is
not necessarily optimal for the generalization of the linear learner. Instead,
in this paper, we consider a one-stage process that incorporates the kernel
learning and linear learner into a unifying framework. To be specific, a
generative network via RFFs is devised to implicitly learn the kernel, followed
by a linear classifier parameterized as a fully-connected layer. Then the
generative network and the classifier are jointly trained by solving the
empirical risk minimization (ERM) problem to reach a one-stage solution. This
end-to-end scheme naturally allows deeper features, in correspondence to a
multi-layer structure, and shows superior generalization performance over the
classical two-stage, RFFs-based methods in real-world classification tasks.
Moreover, inspired by the randomized resampling mechanism of the proposed
method, its enhanced adversarial robustness is investigated and experimentally
verified.
| [
{
"created": "Thu, 10 Sep 2020 00:27:39 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Dec 2021 08:55:10 GMT",
"version": "v2"
},
{
"created": "Tue, 14 Jun 2022 08:02:10 GMT",
"version": "v3"
},
{
"created": "Tue, 22 Nov 2022 07:52:06 GMT",
"version": "v4"
},
{
"created": "Tue, 16 Jan 2024 02:54:58 GMT",
"version": "v5"
}
] | 2024-01-17 | [
[
"Fang",
"Kun",
""
],
[
"Liu",
"Fanghui",
""
],
[
"Huang",
"Xiaolin",
""
],
[
"Yang",
"Jie",
""
]
] | Random Fourier features (RFFs) provide a promising way for kernel learning in a spectral case. Current RFFs-based kernel learning methods usually work in a two-stage way. In the first-stage process, learning the optimal feature map is often formulated as a target alignment problem, which aims to align the learned kernel with the pre-defined target kernel (usually the ideal kernel). In the second-stage process, a linear learner is trained on the mapped random features. Nevertheless, the pre-defined kernel in target alignment is not necessarily optimal for the generalization of the linear learner. Instead, in this paper, we consider a one-stage process that incorporates the kernel learning and linear learner into a unifying framework. To be specific, a generative network via RFFs is devised to implicitly learn the kernel, followed by a linear classifier parameterized as a fully-connected layer. Then the generative network and the classifier are jointly trained by solving the empirical risk minimization (ERM) problem to reach a one-stage solution. This end-to-end scheme naturally allows deeper features, in correspondence to a multi-layer structure, and shows superior generalization performance over the classical two-stage, RFFs-based methods in real-world classification tasks. Moreover, inspired by the randomized resampling mechanism of the proposed method, its enhanced adversarial robustness is investigated and experimentally verified. |
2401.11888 | Junichiro Niimi Dr | Junichiro Niimi | Multimodal Deep Learning of Word-of-Mouth Text and Demographics to
Predict Customer Rating: Handling Consumer Heterogeneity in Marketing | null | null | null | null | cs.CE cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the marketing field, understanding consumer heterogeneity, which is the
internal or psychological difference among consumers that cannot be captured by
behavioral logs, has long been a critical challenge. However, a number of
consumers today usually post their evaluation on the specific product on the
online platform, which can be the valuable source of such unobservable
differences among consumers. Several previous studies have shown the validity
of the analysis on text modality, but on the other hand, such analyses may not
necessarily demonstrate sufficient predictive accuracy for text alone, as they
may not include information readily available from cross-sectional data, such
as consumer profile data. In addition, recent advances in machine learning
techniques, such as large-scale language models (LLMs) and multimodal learning
have made it possible to deal with the various kind of dataset simultaneously,
including textual data and the traditional cross-sectional data, and the joint
representations can be effectively obtained from multiple modalities.
Therefore, this study constructs a product evaluation model that takes into
account consumer heterogeneity by multimodal learning of online product reviews
and consumer profile information. We also compare multiple models using
different modalities or hyper-parameters to demonstrate the robustness of
multimodal learning in marketing analysis.
| [
{
"created": "Mon, 22 Jan 2024 12:28:50 GMT",
"version": "v1"
}
] | 2024-01-23 | [
[
"Niimi",
"Junichiro",
""
]
] | In the marketing field, understanding consumer heterogeneity, which is the internal or psychological difference among consumers that cannot be captured by behavioral logs, has long been a critical challenge. However, a number of consumers today usually post their evaluation on the specific product on the online platform, which can be the valuable source of such unobservable differences among consumers. Several previous studies have shown the validity of the analysis on text modality, but on the other hand, such analyses may not necessarily demonstrate sufficient predictive accuracy for text alone, as they may not include information readily available from cross-sectional data, such as consumer profile data. In addition, recent advances in machine learning techniques, such as large-scale language models (LLMs) and multimodal learning have made it possible to deal with the various kind of dataset simultaneously, including textual data and the traditional cross-sectional data, and the joint representations can be effectively obtained from multiple modalities. Therefore, this study constructs a product evaluation model that takes into account consumer heterogeneity by multimodal learning of online product reviews and consumer profile information. We also compare multiple models using different modalities or hyper-parameters to demonstrate the robustness of multimodal learning in marketing analysis. |
2012.13137 | Naeha Sharif | Naeha Sharif, Lyndon White, Mohammed Bennamoun, Wei Liu, Syed Afaq Ali
Shah | WEmbSim: A Simple yet Effective Metric for Image Captioning | 7 pages | International Conference on Digital Image Computing: Techniques
and Applications (DICTA), 2020 | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The area of automatic image caption evaluation is still undergoing intensive
research to address the needs of generating captions which can meet adequacy
and fluency requirements. Based on our past attempts at developing highly
sophisticated learning-based metrics, we have discovered that a simple cosine
similarity measure using the Mean of Word Embeddings(MOWE) of captions can
actually achieve a surprisingly high performance on unsupervised caption
evaluation. This inspires our proposed work on an effective metric WEmbSim,
which beats complex measures such as SPICE, CIDEr and WMD at system-level
correlation with human judgments. Moreover, it also achieves the best accuracy
at matching human consensus scores for caption pairs, against commonly used
unsupervised methods. Therefore, we believe that WEmbSim sets a new baseline
for any complex metric to be justified.
| [
{
"created": "Thu, 24 Dec 2020 06:39:43 GMT",
"version": "v1"
}
] | 2020-12-25 | [
[
"Sharif",
"Naeha",
""
],
[
"White",
"Lyndon",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Liu",
"Wei",
""
],
[
"Shah",
"Syed Afaq Ali",
""
]
] | The area of automatic image caption evaluation is still undergoing intensive research to address the needs of generating captions which can meet adequacy and fluency requirements. Based on our past attempts at developing highly sophisticated learning-based metrics, we have discovered that a simple cosine similarity measure using the Mean of Word Embeddings(MOWE) of captions can actually achieve a surprisingly high performance on unsupervised caption evaluation. This inspires our proposed work on an effective metric WEmbSim, which beats complex measures such as SPICE, CIDEr and WMD at system-level correlation with human judgments. Moreover, it also achieves the best accuracy at matching human consensus scores for caption pairs, against commonly used unsupervised methods. Therefore, we believe that WEmbSim sets a new baseline for any complex metric to be justified. |
2212.10556 | Junyang Wu | Junyang Wu, Xianhang Li, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin
Zhou, Cihang Xie | Unleashing the Power of Visual Prompting At the Pixel Level | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a simple and effective visual prompting method for
adapting pre-trained models to downstream recognition tasks. Our method
includes two key designs. First, rather than directly adding together the
prompt and the image, we treat the prompt as an extra and independent learnable
component. We show that the strategy of reconciling the prompt and the image
matters, and find that warping the prompt around a properly shrinked image
empirically works the best. Second, we re-introduce two "old tricks" commonly
used in building transferable adversarial examples, i.e., input diversity and
gradient normalization, into visual prompting. These techniques improve
optimization and enable the prompt to generalize better. We provide extensive
experimental results to demonstrate the effectiveness of our method. Using a
CLIP model, our prompting method sets a new record of 82.8% average accuracy
across 12 popular classification datasets, substantially surpassing the prior
art by +5.6%. It is worth noting that this prompting performance already
outperforms linear probing by +2.1% and can even match fully fine-tuning in
certain datasets. In addition, our prompting method shows competitive
performance across different data scales and against distribution shifts. The
code is publicly available at https://github.com/UCSC-VLAA/EVP.
| [
{
"created": "Tue, 20 Dec 2022 18:57:06 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Mar 2023 06:49:51 GMT",
"version": "v2"
}
] | 2023-03-30 | [
[
"Wu",
"Junyang",
""
],
[
"Li",
"Xianhang",
""
],
[
"Wei",
"Chen",
""
],
[
"Wang",
"Huiyu",
""
],
[
"Yuille",
"Alan",
""
],
[
"Zhou",
"Yuyin",
""
],
[
"Xie",
"Cihang",
""
]
] | This paper presents a simple and effective visual prompting method for adapting pre-trained models to downstream recognition tasks. Our method includes two key designs. First, rather than directly adding together the prompt and the image, we treat the prompt as an extra and independent learnable component. We show that the strategy of reconciling the prompt and the image matters, and find that warping the prompt around a properly shrinked image empirically works the best. Second, we re-introduce two "old tricks" commonly used in building transferable adversarial examples, i.e., input diversity and gradient normalization, into visual prompting. These techniques improve optimization and enable the prompt to generalize better. We provide extensive experimental results to demonstrate the effectiveness of our method. Using a CLIP model, our prompting method sets a new record of 82.8% average accuracy across 12 popular classification datasets, substantially surpassing the prior art by +5.6%. It is worth noting that this prompting performance already outperforms linear probing by +2.1% and can even match fully fine-tuning in certain datasets. In addition, our prompting method shows competitive performance across different data scales and against distribution shifts. The code is publicly available at https://github.com/UCSC-VLAA/EVP. |
2106.07611 | Santiago Miret | Santiago Miret, Vui Seng Chua, Mattias Marder, Mariano Phielipp,
Nilesh Jain, Somdeb Majumdar | Neuroevolution-Enhanced Multi-Objective Optimization for Mixed-Precision
Quantization | null | null | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Mixed-precision quantization is a powerful tool to enable memory and compute
savings of neural network workloads by deploying different sets of bit-width
precisions on separate compute operations. In this work, we present a flexible
and scalable framework for automated mixed-precision quantization that
concurrently optimizes task performance, memory compression, and compute
savings through multi-objective evolutionary computing. Our framework centers
on Neuroevolution-Enhanced Multi-Objective Optimization (NEMO), a novel search
method, which combines established search methods with the representational
power of neural networks. Within NEMO, the population is divided into
structurally distinct sub-populations, or species, which jointly create the
Pareto frontier of solutions for the multi-objective problem. At each
generation, species perform separate mutation and crossover operations, and are
re-sized in proportion to the goodness of their contribution to the Pareto
frontier. In our experiments, we define a graph-based representation to
describe the underlying workload, enabling us to deploy graph neural networks
trained by NEMO via neuroevolution, to find Pareto optimal configurations for
MobileNet-V2, ResNet50 and ResNeXt-101-32x8d. Compared to the state-of-the-art,
we achieve competitive results on memory compression and superior results for
compute compression. Further analysis reveals that the graph representation and
the species-based approach employed by NEMO are critical to finding optimal
solutions.
| [
{
"created": "Mon, 14 Jun 2021 17:15:15 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Apr 2022 00:10:14 GMT",
"version": "v2"
}
] | 2022-04-05 | [
[
"Miret",
"Santiago",
""
],
[
"Chua",
"Vui Seng",
""
],
[
"Marder",
"Mattias",
""
],
[
"Phielipp",
"Mariano",
""
],
[
"Jain",
"Nilesh",
""
],
[
"Majumdar",
"Somdeb",
""
]
] | Mixed-precision quantization is a powerful tool to enable memory and compute savings of neural network workloads by deploying different sets of bit-width precisions on separate compute operations. In this work, we present a flexible and scalable framework for automated mixed-precision quantization that concurrently optimizes task performance, memory compression, and compute savings through multi-objective evolutionary computing. Our framework centers on Neuroevolution-Enhanced Multi-Objective Optimization (NEMO), a novel search method, which combines established search methods with the representational power of neural networks. Within NEMO, the population is divided into structurally distinct sub-populations, or species, which jointly create the Pareto frontier of solutions for the multi-objective problem. At each generation, species perform separate mutation and crossover operations, and are re-sized in proportion to the goodness of their contribution to the Pareto frontier. In our experiments, we define a graph-based representation to describe the underlying workload, enabling us to deploy graph neural networks trained by NEMO via neuroevolution, to find Pareto optimal configurations for MobileNet-V2, ResNet50 and ResNeXt-101-32x8d. Compared to the state-of-the-art, we achieve competitive results on memory compression and superior results for compute compression. Further analysis reveals that the graph representation and the species-based approach employed by NEMO are critical to finding optimal solutions. |
1509.09188 | Pavel Kolev | Pavel Kolev and Kurt Mehlhorn | Approximate Spectral Clustering: Efficiency and Guarantees | A preliminary version of this paper was presented at the 24th Annual
European Symposium on Algorithms (ESA 2016) | null | null | null | cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approximate Spectral Clustering (ASC) is a popular and successful heuristic
for partitioning the nodes of a graph $G$ into clusters for which the ratio of
outside connections compared to the volume (sum of degrees) is small. ASC
consists of the following two subroutines: i) compute an approximate Spectral
Embedding via the Power method; and ii) partition the resulting vector set with
an approximate $k$-means clustering algorithm. The resulting $k$-means
partition naturally induces a $k$-way node partition of $G$.
We give a comprehensive analysis of ASC building on the work of Peng et
al.~(SICOMP'17), Boutsidis et al.~(ICML'15) and Ostrovsky et al.~(JACM'13). We
show that ASC i) runs efficiently, and ii) yields a good approximation of an
optimal $k$-way node partition of $G$. Moreover, we strengthen the quality
guarantees of a structural result of Peng et al. by a factor of $k$, and
simultaneously weaken the eigenvalue gap assumption. Further, we show that ASC
finds a $k$-way node partition of $G$ with the strengthened quality guarantees.
| [
{
"created": "Wed, 30 Sep 2015 14:20:44 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jan 2016 00:09:11 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Feb 2016 14:12:31 GMT",
"version": "v3"
},
{
"created": "Thu, 21 Apr 2016 10:17:59 GMT",
"version": "v4"
},
{
"created": "Sun, 29 Jul 2018 16:58:47 GMT",
"version": "v5"
}
] | 2018-07-31 | [
[
"Kolev",
"Pavel",
""
],
[
"Mehlhorn",
"Kurt",
""
]
] | Approximate Spectral Clustering (ASC) is a popular and successful heuristic for partitioning the nodes of a graph $G$ into clusters for which the ratio of outside connections compared to the volume (sum of degrees) is small. ASC consists of the following two subroutines: i) compute an approximate Spectral Embedding via the Power method; and ii) partition the resulting vector set with an approximate $k$-means clustering algorithm. The resulting $k$-means partition naturally induces a $k$-way node partition of $G$. We give a comprehensive analysis of ASC building on the work of Peng et al.~(SICOMP'17), Boutsidis et al.~(ICML'15) and Ostrovsky et al.~(JACM'13). We show that ASC i) runs efficiently, and ii) yields a good approximation of an optimal $k$-way node partition of $G$. Moreover, we strengthen the quality guarantees of a structural result of Peng et al. by a factor of $k$, and simultaneously weaken the eigenvalue gap assumption. Further, we show that ASC finds a $k$-way node partition of $G$ with the strengthened quality guarantees. |
1810.12445 | Alexandre Cunha | Alexandre Cunha | Geometric Median Shapes | Accepted ISBI'19 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present an algorithm to compute the geometric median of shapes which is
based on the extension of median to high dimensions. The median finding problem
is formulated as an optimization over distances and it is solved directly using
the watershed method as an optimizer. We show that computing the geometric
median of shapes is robust in the presence of outliers and it is superior to
the mean shape which can easily be affected by the presence of outliers. The
geometric median shape thus faithfully represents the true central tendency of
the data, contaminated or not. Our approach can be applied to manifold and non
manifold shapes, with connected or disconnected shapes. The application of
distance transforms and watershed algorithm, two well established constructs of
image processing, lead to an algorithm that can be quickly implemented to
generate fast solutions with linear storage requirements. We demonstrate our
methods in synthetic and natural shapes and compare median and mean results
under increasing contamination by strong outliers.
| [
{
"created": "Mon, 29 Oct 2018 22:50:26 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Nov 2018 20:56:04 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Apr 2019 22:52:25 GMT",
"version": "v3"
}
] | 2019-04-17 | [
[
"Cunha",
"Alexandre",
""
]
] | We present an algorithm to compute the geometric median of shapes which is based on the extension of median to high dimensions. The median finding problem is formulated as an optimization over distances and it is solved directly using the watershed method as an optimizer. We show that computing the geometric median of shapes is robust in the presence of outliers and it is superior to the mean shape which can easily be affected by the presence of outliers. The geometric median shape thus faithfully represents the true central tendency of the data, contaminated or not. Our approach can be applied to manifold and non manifold shapes, with connected or disconnected shapes. The application of distance transforms and watershed algorithm, two well established constructs of image processing, lead to an algorithm that can be quickly implemented to generate fast solutions with linear storage requirements. We demonstrate our methods in synthetic and natural shapes and compare median and mean results under increasing contamination by strong outliers. |
1309.4962 | Josef Urban | Cezary Kaliszyk and Josef Urban | HOL(y)Hammer: Online ATP Service for HOL Light | null | null | null | null | cs.AI cs.DL cs.LG cs.LO cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | HOL(y)Hammer is an online AI/ATP service for formal (computer-understandable)
mathematics encoded in the HOL Light system. The service allows its users to
upload and automatically process an arbitrary formal development (project)
based on HOL Light, and to attack arbitrary conjectures that use the concepts
defined in some of the uploaded projects. For that, the service uses several
automated reasoning systems combined with several premise selection methods
trained on all the project proofs. The projects that are readily available on
the server for such query answering include the recent versions of the
Flyspeck, Multivariate Analysis and Complex Analysis libraries. The service
runs on a 48-CPU server, currently employing in parallel for each task 7 AI/ATP
combinations and 4 decision procedures that contribute to its overall
performance. The system is also available for local installation by interested
users, who can customize it for their own proof development. An Emacs interface
allowing parallel asynchronous queries to the service is also provided. The
overall structure of the service is outlined, problems that arise and their
solutions are discussed, and an initial account of using the system is given.
| [
{
"created": "Thu, 19 Sep 2013 13:22:31 GMT",
"version": "v1"
}
] | 2013-09-20 | [
[
"Kaliszyk",
"Cezary",
""
],
[
"Urban",
"Josef",
""
]
] | HOL(y)Hammer is an online AI/ATP service for formal (computer-understandable) mathematics encoded in the HOL Light system. The service allows its users to upload and automatically process an arbitrary formal development (project) based on HOL Light, and to attack arbitrary conjectures that use the concepts defined in some of the uploaded projects. For that, the service uses several automated reasoning systems combined with several premise selection methods trained on all the project proofs. The projects that are readily available on the server for such query answering include the recent versions of the Flyspeck, Multivariate Analysis and Complex Analysis libraries. The service runs on a 48-CPU server, currently employing in parallel for each task 7 AI/ATP combinations and 4 decision procedures that contribute to its overall performance. The system is also available for local installation by interested users, who can customize it for their own proof development. An Emacs interface allowing parallel asynchronous queries to the service is also provided. The overall structure of the service is outlined, problems that arise and their solutions are discussed, and an initial account of using the system is given. |
2009.02961 | Cemre Zor | Sara Atito Ali Ahmed, Cemre Zor, Berrin Yanikoglu, Muhammad Awais,
Josef Kittler | Deep Convolutional Neural Network Ensembles using ECOC | 13 pages double column IEEE transactions style | null | 10.1109/ACCESS.2021.3088717 | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Deep neural networks have enhanced the performance of decision making systems
in many applications including image understanding, and further gains can be
achieved by constructing ensembles. However, designing an ensemble of deep
networks is often not very beneficial since the time needed to train the
networks is very high or the performance gain obtained is not very significant.
In this paper, we analyse error correcting output coding (ECOC) framework to be
used as an ensemble technique for deep networks and propose different design
strategies to address the accuracy-complexity trade-off. We carry out an
extensive comparative study between the introduced ECOC designs and the
state-of-the-art ensemble techniques such as ensemble averaging and gradient
boosting decision trees. Furthermore, we propose a combinatory technique which
is shown to achieve the highest classification performance amongst all.
| [
{
"created": "Mon, 7 Sep 2020 09:20:24 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Mar 2021 16:39:12 GMT",
"version": "v2"
}
] | 2021-11-16 | [
[
"Ahmed",
"Sara Atito Ali",
""
],
[
"Zor",
"Cemre",
""
],
[
"Yanikoglu",
"Berrin",
""
],
[
"Awais",
"Muhammad",
""
],
[
"Kittler",
"Josef",
""
]
] | Deep neural networks have enhanced the performance of decision making systems in many applications including image understanding, and further gains can be achieved by constructing ensembles. However, designing an ensemble of deep networks is often not very beneficial since the time needed to train the networks is very high or the performance gain obtained is not very significant. In this paper, we analyse error correcting output coding (ECOC) framework to be used as an ensemble technique for deep networks and propose different design strategies to address the accuracy-complexity trade-off. We carry out an extensive comparative study between the introduced ECOC designs and the state-of-the-art ensemble techniques such as ensemble averaging and gradient boosting decision trees. Furthermore, we propose a combinatory technique which is shown to achieve the highest classification performance amongst all. |
2406.17419 | Minzheng Wang | Minzheng Wang, Longze Chen, Cheng Fu, Shengyi Liao, Xinghua Zhang,
Bingli Wu, Haiyang Yu, Nan Xu, Lei Zhang, Run Luo, Yunshui Li, Min Yang, Fei
Huang, Yongbin Li | Leave No Document Behind: Benchmarking Long-Context LLMs with Extended
Multi-Doc QA | We release our code and data publicly at
https://github.com/MozerWang/Loong | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-context modeling capabilities have garnered widespread attention,
leading to the emergence of Large Language Models (LLMs) with ultra-context
windows. Meanwhile, benchmarks for evaluating long-context LLMs are gradually
catching up. However, existing benchmarks employ irrelevant noise texts to
artificially extend the length of test cases, diverging from the real-world
scenarios of long-context applications. To bridge this gap, we propose a novel
long-context benchmark, Loong, aligning with realistic scenarios through
extended multi-document question answering (QA). Unlike typical document QA, in
Loong's test cases, each document is relevant to the final answer, ignoring any
document will lead to the failure of the answer. Furthermore, Loong introduces
four types of tasks with a range of context lengths: Spotlight Locating,
Comparison, Clustering, and Chain of Reasoning, to facilitate a more realistic
and comprehensive evaluation of long-context understanding. Extensive
experiments indicate that existing long-context language models still exhibit
considerable potential for enhancement. Retrieval augmented generation (RAG)
achieves poor performance, demonstrating that Loong can reliably assess the
model's long-context modeling capabilities.
| [
{
"created": "Tue, 25 Jun 2024 09:42:56 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Wang",
"Minzheng",
""
],
[
"Chen",
"Longze",
""
],
[
"Fu",
"Cheng",
""
],
[
"Liao",
"Shengyi",
""
],
[
"Zhang",
"Xinghua",
""
],
[
"Wu",
"Bingli",
""
],
[
"Yu",
"Haiyang",
""
],
[
"Xu",
"Nan",
""
],
[
"Zhang",
"Lei",
""
],
[
"Luo",
"Run",
""
],
[
"Li",
"Yunshui",
""
],
[
"Yang",
"Min",
""
],
[
"Huang",
"Fei",
""
],
[
"Li",
"Yongbin",
""
]
] | Long-context modeling capabilities have garnered widespread attention, leading to the emergence of Large Language Models (LLMs) with ultra-context windows. Meanwhile, benchmarks for evaluating long-context LLMs are gradually catching up. However, existing benchmarks employ irrelevant noise texts to artificially extend the length of test cases, diverging from the real-world scenarios of long-context applications. To bridge this gap, we propose a novel long-context benchmark, Loong, aligning with realistic scenarios through extended multi-document question answering (QA). Unlike typical document QA, in Loong's test cases, each document is relevant to the final answer, ignoring any document will lead to the failure of the answer. Furthermore, Loong introduces four types of tasks with a range of context lengths: Spotlight Locating, Comparison, Clustering, and Chain of Reasoning, to facilitate a more realistic and comprehensive evaluation of long-context understanding. Extensive experiments indicate that existing long-context language models still exhibit considerable potential for enhancement. Retrieval augmented generation (RAG) achieves poor performance, demonstrating that Loong can reliably assess the model's long-context modeling capabilities. |
2312.03740 | Prabin Bhandari | Prabin Bhandari | A Survey on Prompting Techniques in LLMs | 10 pages, 4 Figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Autoregressive Large Language Models have transformed the landscape of
Natural Language Processing. Pre-train and prompt paradigm has replaced the
conventional approach of pre-training and fine-tuning for many downstream NLP
tasks. This shift has been possible largely due to LLMs and innovative
prompting techniques. LLMs have shown great promise for a variety of downstream
tasks owing to their vast parameters and huge datasets that they are
pre-trained on. However, in order to fully realize their potential, their
outputs must be guided towards the desired outcomes. Prompting, in which a
specific input or instruction is provided to guide the LLMs toward the intended
output, has become a tool for achieving this goal. In this paper, we discuss
the various prompting techniques that have been applied to fully harness the
power of LLMs. We present a taxonomy of existing literature on prompting
techniques and provide a concise survey based on this taxonomy. Further, we
identify some open problems in the realm of prompting in autoregressive LLMs
which could serve as a direction for future research.
| [
{
"created": "Tue, 28 Nov 2023 17:56:34 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2024 22:27:39 GMT",
"version": "v2"
}
] | 2024-04-18 | [
[
"Bhandari",
"Prabin",
""
]
] | Autoregressive Large Language Models have transformed the landscape of Natural Language Processing. Pre-train and prompt paradigm has replaced the conventional approach of pre-training and fine-tuning for many downstream NLP tasks. This shift has been possible largely due to LLMs and innovative prompting techniques. LLMs have shown great promise for a variety of downstream tasks owing to their vast parameters and huge datasets that they are pre-trained on. However, in order to fully realize their potential, their outputs must be guided towards the desired outcomes. Prompting, in which a specific input or instruction is provided to guide the LLMs toward the intended output, has become a tool for achieving this goal. In this paper, we discuss the various prompting techniques that have been applied to fully harness the power of LLMs. We present a taxonomy of existing literature on prompting techniques and provide a concise survey based on this taxonomy. Further, we identify some open problems in the realm of prompting in autoregressive LLMs which could serve as a direction for future research. |
2307.05601 | Artem Bituitskii | Artem Bituitskii | Unsupervised Domain Adaptation with Deep Neural-Network | Master's thesis, 34 pages, 13 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report contributes to the field of unsupervised domain adaptation by
providing an analysis of existing methods, introducing a new approach, and
demonstrating the potential for improving visual recognition tasks across
different domains. The results of this study open up opportunities for further
study and development of advanced methods in the field of domain adaptation.
| [
{
"created": "Mon, 10 Jul 2023 20:28:58 GMT",
"version": "v1"
}
] | 2023-07-13 | [
[
"Bituitskii",
"Artem",
""
]
] | This report contributes to the field of unsupervised domain adaptation by providing an analysis of existing methods, introducing a new approach, and demonstrating the potential for improving visual recognition tasks across different domains. The results of this study open up opportunities for further study and development of advanced methods in the field of domain adaptation. |
1607.02641 | Dominik Wurzer Dominik Wurzer | Dominik Wurzer, Miles Osborne, Victor Lavrenko | Randomised Relevance Model | Information Retrieval, Query Expansion, Locality Sensitive Hashing,
Randomized Algorithm, Relevance Model | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relevance Models are well-known retrieval models and capable of producing
competitive results. However, because they use query expansion they can be very
slow. We address this slowness by incorporating two variants of locality
sensitive hashing (LSH) into the query expansion process. Results on two
document collections suggest that we can obtain large reductions in the amount
of work, with a small reduction in effectiveness. Our approach is shown to be
additive when pruning query terms.
| [
{
"created": "Sat, 9 Jul 2016 18:10:06 GMT",
"version": "v1"
}
] | 2016-07-12 | [
[
"Wurzer",
"Dominik",
""
],
[
"Osborne",
"Miles",
""
],
[
"Lavrenko",
"Victor",
""
]
] | Relevance Models are well-known retrieval models and capable of producing competitive results. However, because they use query expansion they can be very slow. We address this slowness by incorporating two variants of locality sensitive hashing (LSH) into the query expansion process. Results on two document collections suggest that we can obtain large reductions in the amount of work, with a small reduction in effectiveness. Our approach is shown to be additive when pruning query terms. |
2012.11405 | Sophia Althammer | Sophia Althammer, Sebastian Hofst\"atter, Allan Hanbury | Cross-domain Retrieval in the Legal and Patent Domains: a
Reproducibility Study | Accepted at ECIR 2021 (Reproducibility paper track) | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Domain specific search has always been a challenging information retrieval
task due to several challenges such as the domain specific language, the unique
task setting, as well as the lack of accessible queries and corresponding
relevance judgements. In the last years, pretrained language models, such as
BERT, revolutionized web and news search. Naturally, the community aims to
adapt these advancements to cross-domain transfer of retrieval models for
domain specific search. In the context of legal document retrieval, Shao et al.
propose the BERT-PLI framework by modeling the Paragraph Level Interactions
with the language model BERT. In this paper we reproduce the original
experiments, we clarify pre-processing steps, add missing scripts for framework
steps and investigate different evaluation approaches, however we are not able
to reproduce the evaluation results. Contrary to the original paper, we
demonstrate that the domain specific paragraph-level modelling does not appear
to help the performance of the BERT-PLI model compared to paragraph-level
modelling with the original BERT. In addition to our legal search
reproducibility study, we investigate BERT-PLI for document retrieval in the
patent domain. We find that the BERT-PLI model does not yet achieve performance
improvements for patent document retrieval compared to the BM25 baseline.
Furthermore, we evaluate the BERT-PLI model for cross-domain retrieval between
the legal and patent domain on individual components, both on a paragraph and
document-level. We find that the transfer of the BERT-PLI model on the
paragraph-level leads to comparable results between both domains as well as
first promising results for the cross-domain transfer on the document-level.
For reproducibility and transparency as well as to benefit the community we
make our source code and the trained models publicly available.
| [
{
"created": "Mon, 21 Dec 2020 15:06:15 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jan 2021 09:18:44 GMT",
"version": "v2"
}
] | 2021-01-20 | [
[
"Althammer",
"Sophia",
""
],
[
"Hofstätter",
"Sebastian",
""
],
[
"Hanbury",
"Allan",
""
]
] | Domain specific search has always been a challenging information retrieval task due to several challenges such as the domain specific language, the unique task setting, as well as the lack of accessible queries and corresponding relevance judgements. In the last years, pretrained language models, such as BERT, revolutionized web and news search. Naturally, the community aims to adapt these advancements to cross-domain transfer of retrieval models for domain specific search. In the context of legal document retrieval, Shao et al. propose the BERT-PLI framework by modeling the Paragraph Level Interactions with the language model BERT. In this paper we reproduce the original experiments, we clarify pre-processing steps, add missing scripts for framework steps and investigate different evaluation approaches, however we are not able to reproduce the evaluation results. Contrary to the original paper, we demonstrate that the domain specific paragraph-level modelling does not appear to help the performance of the BERT-PLI model compared to paragraph-level modelling with the original BERT. In addition to our legal search reproducibility study, we investigate BERT-PLI for document retrieval in the patent domain. We find that the BERT-PLI model does not yet achieve performance improvements for patent document retrieval compared to the BM25 baseline. Furthermore, we evaluate the BERT-PLI model for cross-domain retrieval between the legal and patent domain on individual components, both on a paragraph and document-level. We find that the transfer of the BERT-PLI model on the paragraph-level leads to comparable results between both domains as well as first promising results for the cross-domain transfer on the document-level. For reproducibility and transparency as well as to benefit the community we make our source code and the trained models publicly available. |
2106.07504 | Ulrich A\"ivodji | Ulrich A\"ivodji, Hiromi Arai, S\'ebastien Gambs, Satoshi Hara | Characterizing the risk of fairwashing | Accepted to NeurIPS 2021 | null | null | null | cs.LG cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fairwashing refers to the risk that an unfair black-box model can be
explained by a fairer model through post-hoc explanation manipulation. In this
paper, we investigate the capability of fairwashing attacks by analyzing their
fidelity-unfairness trade-offs. In particular, we show that fairwashed
explanation models can generalize beyond the suing group (i.e., data points
that are being explained), meaning that a fairwashed explainer can be used to
rationalize subsequent unfair decisions of a black-box model. We also
demonstrate that fairwashing attacks can transfer across black-box models,
meaning that other black-box models can perform fairwashing without explicitly
using their predictions. This generalization and transferability of fairwashing
attacks imply that their detection will be difficult in practice. Finally, we
propose an approach to quantify the risk of fairwashing, which is based on the
computation of the range of the unfairness of high-fidelity explainers.
| [
{
"created": "Mon, 14 Jun 2021 15:33:17 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Nov 2021 14:03:53 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Nov 2021 02:01:22 GMT",
"version": "v3"
}
] | 2021-11-04 | [
[
"Aïvodji",
"Ulrich",
""
],
[
"Arai",
"Hiromi",
""
],
[
"Gambs",
"Sébastien",
""
],
[
"Hara",
"Satoshi",
""
]
] | Fairwashing refers to the risk that an unfair black-box model can be explained by a fairer model through post-hoc explanation manipulation. In this paper, we investigate the capability of fairwashing attacks by analyzing their fidelity-unfairness trade-offs. In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., data points that are being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model. We also demonstrate that fairwashing attacks can transfer across black-box models, meaning that other black-box models can perform fairwashing without explicitly using their predictions. This generalization and transferability of fairwashing attacks imply that their detection will be difficult in practice. Finally, we propose an approach to quantify the risk of fairwashing, which is based on the computation of the range of the unfairness of high-fidelity explainers. |
1908.03687 | Zhanat Kappassov | Zhanat Kappassov, Daulet Baimukashev, Zharaskhan Kuanyshuly, Yerzhan
Massalin, Arshat Urazbayev, Huseyin Atakan Varol | Color-Coded Fiber-Optic Tactile Sensor for an Elastomeric Robot Skin | Presented at ICRA2019, Montreal | 2019 International Conference on Robotics and Automation (ICRA),
Montreal, QC, Canada, 2019, pp. 2146-2152 | 10.1109/ICRA.2019.8793262 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sense of touch is essential for reliable mapping between the environment
and a robot which interacts physically with objects. Presumably, an artificial
tactile skin would facilitate safe interaction of the robots with the
environment. In this work, we present our color-coded tactile sensor,
incorporating plastic optical fibers (POF), transparent silicone rubber and an
off-the-shelf color camera. Processing electronics are placed away from the
sensing surface to make the sensor robust to harsh environments. Contact
localization is possible thanks to the lower number of light sources compared
to the number of camera POFs. Classical machine learning techniques and a
hierarchical classification scheme were used for contact localization.
Specifically, we generated the mapping from stimulation to sensation of a
robotic perception system using our sensor. We achieved a force sensing range
up to 18 N with the force resolution of around 3.6~N and the spatial resolution
of 8~mm. The color-coded tactile sensor is suitable for tactile exploration and
might enable further innovations in robust tactile sensing.
| [
{
"created": "Sat, 10 Aug 2019 04:35:02 GMT",
"version": "v1"
}
] | 2019-08-21 | [
[
"Kappassov",
"Zhanat",
""
],
[
"Baimukashev",
"Daulet",
""
],
[
"Kuanyshuly",
"Zharaskhan",
""
],
[
"Massalin",
"Yerzhan",
""
],
[
"Urazbayev",
"Arshat",
""
],
[
"Varol",
"Huseyin Atakan",
""
]
] | The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6~N and the spatial resolution of 8~mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing. |
2305.08350 | Quanquan Gu | Yue Wu and Jiafan He and Quanquan Gu | Uniform-PAC Guarantees for Model-Based RL with Bounded Eluder Dimension | 21 pages, 1 table. To appear in UAI 2023 | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been remarkable progress in reinforcement learning (RL)
with general function approximation. However, all these works only provide
regret or sample complexity guarantees. It is still an open question if one can
achieve stronger performance guarantees, i.e., the uniform probably approximate
correctness (Uniform-PAC) guarantee that can imply both a sub-linear regret
bound and a polynomial sample complexity for any target learning accuracy. We
study this problem by proposing algorithms for both nonlinear bandits and
model-based episodic RL using the general function class with a bounded eluder
dimension. The key idea of the proposed algorithms is to assign each action to
different levels according to its width with respect to the confidence set. The
achieved uniform-PAC sample complexity is tight in the sense that it matches
the state-of-the-art regret bounds or sample complexity guarantees when reduced
to the linear case. To the best of our knowledge, this is the first work for
uniform-PAC guarantees on bandit and RL that goes beyond linear cases.
| [
{
"created": "Mon, 15 May 2023 05:07:45 GMT",
"version": "v1"
}
] | 2023-05-16 | [
[
"Wu",
"Yue",
""
],
[
"He",
"Jiafan",
""
],
[
"Gu",
"Quanquan",
""
]
] | Recently, there has been remarkable progress in reinforcement learning (RL) with general function approximation. However, all these works only provide regret or sample complexity guarantees. It is still an open question if one can achieve stronger performance guarantees, i.e., the uniform probably approximate correctness (Uniform-PAC) guarantee that can imply both a sub-linear regret bound and a polynomial sample complexity for any target learning accuracy. We study this problem by proposing algorithms for both nonlinear bandits and model-based episodic RL using the general function class with a bounded eluder dimension. The key idea of the proposed algorithms is to assign each action to different levels according to its width with respect to the confidence set. The achieved uniform-PAC sample complexity is tight in the sense that it matches the state-of-the-art regret bounds or sample complexity guarantees when reduced to the linear case. To the best of our knowledge, this is the first work for uniform-PAC guarantees on bandit and RL that goes beyond linear cases. |
0808.2953 | Paul Tarau | Paul Tarau | Declarative Combinatorics: Isomorphisms, Hylomorphisms and Hereditarily
Finite Data Types in Haskell | unpublished draft, revision 3, added various new encodings, with
focus on primes and multisets, now 104 pages | null | null | null | cs.PL cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is an exploration in a functional programming framework of {\em
isomorphisms} between elementary data types (natural numbers, sets, multisets,
finite functions, permutations binary decision diagrams, graphs, hypergraphs,
parenthesis languages, dyadic rationals, primes, DNA sequences etc.) and their
extension to hereditarily finite universes through {\em hylomorphisms} derived
from {\em ranking/unranking} and {\em pairing/unpairing} operations.
An embedded higher order {\em combinator language} provides any-to-any
encodings automatically.
Besides applications to experimental mathematics, a few examples of ``free
algorithms'' obtained by transferring operations between data types are shown.
Other applications range from stream iterators on combinatorial objects to
self-delimiting codes, succinct data representations and generation of random
instances.
The paper covers 59 data types and, through the use of the embedded
combinator language, provides 3540 distinct bijective transformations between
them.
The self-contained source code of the paper, as generated from a literate
Haskell program, is available at
\url{http://logic.csci.unt.edu/tarau/research/2008/fISO.zip}.
{\bf Keywords}: Haskell data representations, data type isomorphisms,
declarative combinatorics, computational mathematics, Ackermann encoding,
G\"{o}del numberings, arithmetization, ranking/unranking, hereditarily finite
sets, functions and permutations, encodings of binary decision diagrams, dyadic
rationals, DNA encodings
| [
{
"created": "Thu, 21 Aug 2008 16:47:38 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Oct 2008 18:47:59 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Dec 2008 01:28:15 GMT",
"version": "v3"
},
{
"created": "Mon, 19 Jan 2009 19:39:51 GMT",
"version": "v4"
}
] | 2009-01-19 | [
[
"Tarau",
"Paul",
""
]
] | This paper is an exploration in a functional programming framework of {\em isomorphisms} between elementary data types (natural numbers, sets, multisets, finite functions, permutations binary decision diagrams, graphs, hypergraphs, parenthesis languages, dyadic rationals, primes, DNA sequences etc.) and their extension to hereditarily finite universes through {\em hylomorphisms} derived from {\em ranking/unranking} and {\em pairing/unpairing} operations. An embedded higher order {\em combinator language} provides any-to-any encodings automatically. Besides applications to experimental mathematics, a few examples of ``free algorithms'' obtained by transferring operations between data types are shown. Other applications range from stream iterators on combinatorial objects to self-delimiting codes, succinct data representations and generation of random instances. The paper covers 59 data types and, through the use of the embedded combinator language, provides 3540 distinct bijective transformations between them. The self-contained source code of the paper, as generated from a literate Haskell program, is available at \url{http://logic.csci.unt.edu/tarau/research/2008/fISO.zip}. {\bf Keywords}: Haskell data representations, data type isomorphisms, declarative combinatorics, computational mathematics, Ackermann encoding, G\"{o}del numberings, arithmetization, ranking/unranking, hereditarily finite sets, functions and permutations, encodings of binary decision diagrams, dyadic rationals, DNA encodings |
1312.6948 | Sourish Dasgupta | Sourish Dasgupta, Rupali KaPatel, Ankur Padia, Kushal Shah | Description Logics based Formalization of Wh-Queries | Natural Language Query Processing, Representation | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of Natural Language Query Formalization (NLQF) is to translate a
given user query in natural language (NL) into a formal language so that the
semantic interpretation has equivalence with the NL interpretation.
Formalization of NL queries enables logic based reasoning during information
retrieval, database query, question-answering, etc. Formalization also helps in
Web query normalization and indexing, query intent analysis, etc. In this paper
we are proposing a Description Logics based formal methodology for wh-query
intent (also called desire) identification and corresponding formal
translation. We evaluated the scalability of our proposed formalism using
Microsoft Encarta 98 query dataset and OWL-S TC v.4.0 dataset.
| [
{
"created": "Wed, 25 Dec 2013 09:23:49 GMT",
"version": "v1"
}
] | 2013-12-30 | [
[
"Dasgupta",
"Sourish",
""
],
[
"KaPatel",
"Rupali",
""
],
[
"Padia",
"Ankur",
""
],
[
"Shah",
"Kushal",
""
]
] | The problem of Natural Language Query Formalization (NLQF) is to translate a given user query in natural language (NL) into a formal language so that the semantic interpretation has equivalence with the NL interpretation. Formalization of NL queries enables logic based reasoning during information retrieval, database query, question-answering, etc. Formalization also helps in Web query normalization and indexing, query intent analysis, etc. In this paper we are proposing a Description Logics based formal methodology for wh-query intent (also called desire) identification and corresponding formal translation. We evaluated the scalability of our proposed formalism using Microsoft Encarta 98 query dataset and OWL-S TC v.4.0 dataset. |
1401.0583 | Garrett Warnell | Garrett Warnell, Sourabh Bhattacharya, Rama Chellappa, Tamer Basar | Adaptive-Rate Compressive Sensing Using Side Information | null | null | 10.1109/TIP.2015.2456425 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide two novel adaptive-rate compressive sensing (CS) strategies for
sparse, time-varying signals using side information. Our first method utilizes
extra cross-validation measurements, and the second one exploits extra
low-resolution measurements. Unlike the majority of current CS techniques, we
do not assume that we know an upper bound on the number of significant
coefficients that comprise the images in the video sequence. Instead, we use
the side information to predict the number of significant coefficients in the
signal at the next time instant. For each image in the video sequence, our
techniques specify a fixed number of spatially-multiplexed CS measurements to
acquire, and adjust this quantity from image to image. Our strategies are
developed in the specific context of background subtraction for surveillance
video, and we experimentally validate the proposed methods on real video
sequences.
| [
{
"created": "Fri, 3 Jan 2014 04:01:29 GMT",
"version": "v1"
}
] | 2023-07-19 | [
[
"Warnell",
"Garrett",
""
],
[
"Bhattacharya",
"Sourabh",
""
],
[
"Chellappa",
"Rama",
""
],
[
"Basar",
"Tamer",
""
]
] | We provide two novel adaptive-rate compressive sensing (CS) strategies for sparse, time-varying signals using side information. Our first method utilizes extra cross-validation measurements, and the second one exploits extra low-resolution measurements. Unlike the majority of current CS techniques, we do not assume that we know an upper bound on the number of significant coefficients that comprise the images in the video sequence. Instead, we use the side information to predict the number of significant coefficients in the signal at the next time instant. For each image in the video sequence, our techniques specify a fixed number of spatially-multiplexed CS measurements to acquire, and adjust this quantity from image to image. Our strategies are developed in the specific context of background subtraction for surveillance video, and we experimentally validate the proposed methods on real video sequences. |
0902.2853 | Laurent Poinsot | Laurent Poinsot (LIPN), G\'erard Duchamp (LIPN) | A formal calculus on the Riordan near algebra | 29 p | Advances and Applications in Discrete Mathematics 6, 1 (2010)
11-44 | null | null | cs.SC math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Riordan group is the semi-direct product of a multiplicative group of
invertible series and a group, under substitution, of non units. The Riordan
near algebra, as introduced in this paper, is the Cartesian product of the
algebra of formal power series and its principal ideal of non units, equipped
with a product that extends the multiplication of the Riordan group. The latter
is naturally embedded as a subgroup of units into the former. In this paper, we
prove the existence of a formal calculus on the Riordan algebra. This formal
calculus plays a role similar to those of holomorphic calculi in the Banach or
Fr\'echet algebras setting, but without the constraint of a radius of
convergence. Using this calculus, we define \emph{en passant} a notion of
generalized powers in the Riordan group.
| [
{
"created": "Tue, 17 Feb 2009 08:13:27 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Mar 2009 11:10:13 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Feb 2010 20:01:23 GMT",
"version": "v3"
},
{
"created": "Thu, 4 Mar 2010 07:28:28 GMT",
"version": "v4"
}
] | 2010-09-30 | [
[
"Poinsot",
"Laurent",
"",
"LIPN"
],
[
"Duchamp",
"Gérard",
"",
"LIPN"
]
] | The Riordan group is the semi-direct product of a multiplicative group of invertible series and a group, under substitution, of non units. The Riordan near algebra, as introduced in this paper, is the Cartesian product of the algebra of formal power series and its principal ideal of non units, equipped with a product that extends the multiplication of the Riordan group. The latter is naturally embedded as a subgroup of units into the former. In this paper, we prove the existence of a formal calculus on the Riordan algebra. This formal calculus plays a role similar to those of holomorphic calculi in the Banach or Fr\'echet algebras setting, but without the constraint of a radius of convergence. Using this calculus, we define \emph{en passant} a notion of generalized powers in the Riordan group. |
2101.09577 | Bla\v{z} \v{S}krlj | Bla\v{z} \v{S}krlj, Sa\v{s}o D\v{z}eroski, Nada Lavra\v{c} and Matej
Petkovi\'c | ReliefE: Feature Ranking in High-dimensional Spaces via Manifold
Embeddings | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Feature ranking has been widely adopted in machine learning applications such
as high-throughput biology and social sciences. The approaches of the popular
Relief family of algorithms assign importances to features by iteratively
accounting for nearest relevant and irrelevant instances. Despite their high
utility, these algorithms can be computationally expensive and not well suited
for high-dimensional sparse input spaces. In contrast, recent embedding-based
methods learn compact, low-dimensional representations, potentially
facilitating down-stream learning capabilities of conventional learners. This
paper explores how the Relief branch of algorithms can be adapted to benefit
from (Riemannian) manifold-based embeddings of instance and target spaces,
where a given embedding's dimensionality is intrinsic to the dimensionality of
the considered data set. The developed ReliefE algorithm is faster and can
result in better feature rankings, as shown by our evaluation on 20 real-life
data sets for multi-class and multi-label classification tasks. The utility of
ReliefE for high-dimensional data sets is ensured by its implementation that
utilizes sparse matrix algebraic operations. Finally, the relation of ReliefE
to other ranking algorithms is studied via the Fuzzy Jaccard Index.
| [
{
"created": "Sat, 23 Jan 2021 20:23:31 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Škrlj",
"Blaž",
""
],
[
"Džeroski",
"Sašo",
""
],
[
"Lavrač",
"Nada",
""
],
[
"Petković",
"Matej",
""
]
] | Feature ranking has been widely adopted in machine learning applications such as high-throughput biology and social sciences. The approaches of the popular Relief family of algorithms assign importances to features by iteratively accounting for nearest relevant and irrelevant instances. Despite their high utility, these algorithms can be computationally expensive and not well suited for high-dimensional sparse input spaces. In contrast, recent embedding-based methods learn compact, low-dimensional representations, potentially facilitating down-stream learning capabilities of conventional learners. This paper explores how the Relief branch of algorithms can be adapted to benefit from (Riemannian) manifold-based embeddings of instance and target spaces, where a given embedding's dimensionality is intrinsic to the dimensionality of the considered data set. The developed ReliefE algorithm is faster and can result in better feature rankings, as shown by our evaluation on 20 real-life data sets for multi-class and multi-label classification tasks. The utility of ReliefE for high-dimensional data sets is ensured by its implementation that utilizes sparse matrix algebraic operations. Finally, the relation of ReliefE to other ranking algorithms is studied via the Fuzzy Jaccard Index. |
1212.1763 | Mohammadi Akheela Khanum Mrs | Mohammadi Akheela Khanum, Lamia Mohammed Ketari | Trends in Combating Image Spam E-mails | null | ICFIT 2012 | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid adoption of Internet as an easy way to communicate, the amount
of unsolicited e-mails, known as spam e-mails, has been growing rapidly. The
major problem of spam e-mails is the loss of productivity and a drain on IT
resources. Today, we receive spam more rapidly than the legitimate e-mails.
Initially, spam e-mails contained only textual messages which were easily
detected by the text-based spam filters. To evade such detection, spammers came
up with a new sophisticated technique called image spam. Image spam consists in
embedding the advertisement text in images rather than in the body of the
e-mail, yet the image contents are not detected by most spam filters. In this
paper, we examine the motivations and the challenges in image spam filtering
research, and we review the recent trends in combating image spam e-mails. The
review indicates that spamming is a business model and spammers are becoming
more sophisticated in their approach to adapt to all challenges, and hence,
defeating the conventional spam filtering technologies. Therefore, image spam
detection techniques should be scalable and adaptable to meet the future
tactics of the spammers.
| [
{
"created": "Sat, 8 Dec 2012 07:32:10 GMT",
"version": "v1"
}
] | 2012-12-11 | [
[
"Khanum",
"Mohammadi Akheela",
""
],
[
"Ketari",
"Lamia Mohammed",
""
]
] | With the rapid adoption of Internet as an easy way to communicate, the amount of unsolicited e-mails, known as spam e-mails, has been growing rapidly. The major problem of spam e-mails is the loss of productivity and a drain on IT resources. Today, we receive spam more rapidly than the legitimate e-mails. Initially, spam e-mails contained only textual messages which were easily detected by the text-based spam filters. To evade such detection, spammers came up with a new sophisticated technique called image spam. Image spam consists in embedding the advertisement text in images rather than in the body of the e-mail, yet the image contents are not detected by most spam filters. In this paper, we examine the motivations and the challenges in image spam filtering research, and we review the recent trends in combating image spam e-mails. The review indicates that spamming is a business model and spammers are becoming more sophisticated in their approach to adapt to all challenges, and hence, defeating the conventional spam filtering technologies. Therefore, image spam detection techniques should be scalable and adaptable to meet the future tactics of the spammers. |
1812.04443 | Alexander Span | Alexander Span, Vahid Aref, Henning Buelow, Stephan ten Brink | Time-Bandwidth Product Perspective for Multi-Soliton Phase Modulation | null | null | 10.1109/TCOMM.2019.2913870 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-soliton pulses are potential candidates for fiber optical transmission
where the information is modulated and recovered in the so-called nonlinear
Fourier domain. While this is an elegant technique to account for the channel
nonlinearity, the obtained spectral efficiency, so far, is not competitive with
classic Nyquist-based schemes. This is especially due to the observation that
soliton pulses generally exhibit a large time-bandwidth product. We consider
the phase modulation of spectral amplitudes of higher order solitons, taking
into account their varying spectral and temporal behavior when propagating
along the fiber. For second and third order solitons, we numerically optimize
the pulse shapes to minimize the time-bandwidth product. We study the behavior
of multi-soliton pulse duration and bandwidth and generally observe two corner
cases where we approximate them analytically. We use these results to give an
estimate on the minimal achievable time-bandwidth product per eigenvalue.
| [
{
"created": "Tue, 11 Dec 2018 14:55:10 GMT",
"version": "v1"
}
] | 2019-06-13 | [
[
"Span",
"Alexander",
""
],
[
"Aref",
"Vahid",
""
],
[
"Buelow",
"Henning",
""
],
[
"Brink",
"Stephan ten",
""
]
] | Multi-soliton pulses are potential candidates for fiber optical transmission where the information is modulated and recovered in the so-called nonlinear Fourier domain. While this is an elegant technique to account for the channel nonlinearity, the obtained spectral efficiency, so far, is not competitive with classic Nyquist-based schemes. This is especially due to the observation that soliton pulses generally exhibit a large time-bandwidth product. We consider the phase modulation of spectral amplitudes of higher order solitons, taking into account their varying spectral and temporal behavior when propagating along the fiber. For second and third order solitons, we numerically optimize the pulse shapes to minimize the time-bandwidth product. We study the behavior of multi-soliton pulse duration and bandwidth and generally observe two corner cases where we approximate them analytically. We use these results to give an estimate on the minimal achievable time-bandwidth product per eigenvalue. |
1811.09556 | Yan Wu | Yan Wu, Greg Wayne, Karol Gregor, Timothy Lillicrap | Learning Attractor Dynamics for Generative Memory | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A central challenge faced by memory systems is the robust retrieval of a
stored pattern in the presence of interference due to other stored patterns and
noise. A theoretically well-founded solution to robust retrieval is given by
attractor dynamics, which iteratively clean up patterns during recall. However,
incorporating attractor dynamics into modern deep learning systems poses
difficulties: attractor basins are characterised by vanishing gradients, which
are known to make training neural networks difficult. In this work, we avoid
the vanishing gradient problem by training a generative distributed memory
without simulating the attractor dynamics. Based on the idea of memory writing
as inference, as proposed in the Kanerva Machine, we show that a
likelihood-based Lyapunov function emerges from maximising the variational
lower-bound of a generative memory. Experiments show it converges to correct
patterns upon iterative retrieval and achieves competitive performance as both
a memory model and a generative model.
| [
{
"created": "Fri, 23 Nov 2018 16:49:02 GMT",
"version": "v1"
}
] | 2018-11-26 | [
[
"Wu",
"Yan",
""
],
[
"Wayne",
"Greg",
""
],
[
"Gregor",
"Karol",
""
],
[
"Lillicrap",
"Timothy",
""
]
] | A central challenge faced by memory systems is the robust retrieval of a stored pattern in the presence of interference due to other stored patterns and noise. A theoretically well-founded solution to robust retrieval is given by attractor dynamics, which iteratively clean up patterns during recall. However, incorporating attractor dynamics into modern deep learning systems poses difficulties: attractor basins are characterised by vanishing gradients, which are known to make training neural networks difficult. In this work, we avoid the vanishing gradient problem by training a generative distributed memory without simulating the attractor dynamics. Based on the idea of memory writing as inference, as proposed in the Kanerva Machine, we show that a likelihood-based Lyapunov function emerges from maximising the variational lower-bound of a generative memory. Experiments show it converges to correct patterns upon iterative retrieval and achieves competitive performance as both a memory model and a generative model. |
2310.17168 | Carson Eisenach | Sohrab Andaz, Carson Eisenach, Dhruv Madeka, Kari Torkkola, Randy Jia,
Dean Foster, Sham Kakade | Learning an Inventory Control Policy with General Inventory Arrival
Dynamics | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | In this paper we address the problem of learning and backtesting inventory
control policies in the presence of general arrival dynamics -- which we term
as a quantity-over-time arrivals model (QOT). We also allow for order
quantities to be modified as a post-processing step to meet vendor constraints
such as order minimum and batch size constraints -- a common practice in real
supply chains. To the best of our knowledge this is the first work to handle
either arbitrary arrival dynamics or an arbitrary downstream post-processing of
order quantities. Building upon recent work (Madeka et al., 2022) we similarly
formulate the periodic review inventory control problem as an exogenous
decision process, where most of the state is outside the control of the agent.
Madeka et al., 2022 show how to construct a simulator that replays historic
data to solve this class of problem. In our case, we incorporate a deep
generative model for the arrivals process as part of the history replay. By
formulating the problem as an exogenous decision process, we can apply results
from Madeka et al., 2022 to obtain a reduction to supervised learning. Via
simulation studies we show that this approach yields statistically significant
improvements in profitability over production baselines. Using data from a
real-world A/B test, we show that Gen-QOT generalizes well to off-policy data
and that the resulting buying policy outperforms traditional inventory
management systems in real world settings.
| [
{
"created": "Thu, 26 Oct 2023 05:49:13 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jan 2024 00:12:20 GMT",
"version": "v2"
}
] | 2024-01-23 | [
[
"Andaz",
"Sohrab",
""
],
[
"Eisenach",
"Carson",
""
],
[
"Madeka",
"Dhruv",
""
],
[
"Torkkola",
"Kari",
""
],
[
"Jia",
"Randy",
""
],
[
"Foster",
"Dean",
""
],
[
"Kakade",
"Sham",
""
]
] | In this paper we address the problem of learning and backtesting inventory control policies in the presence of general arrival dynamics -- which we term as a quantity-over-time arrivals model (QOT). We also allow for order quantities to be modified as a post-processing step to meet vendor constraints such as order minimum and batch size constraints -- a common practice in real supply chains. To the best of our knowledge this is the first work to handle either arbitrary arrival dynamics or an arbitrary downstream post-processing of order quantities. Building upon recent work (Madeka et al., 2022) we similarly formulate the periodic review inventory control problem as an exogenous decision process, where most of the state is outside the control of the agent. Madeka et al., 2022 show how to construct a simulator that replays historic data to solve this class of problem. In our case, we incorporate a deep generative model for the arrivals process as part of the history replay. By formulating the problem as an exogenous decision process, we can apply results from Madeka et al., 2022 to obtain a reduction to supervised learning. Via simulation studies we show that this approach yields statistically significant improvements in profitability over production baselines. Using data from a real-world A/B test, we show that Gen-QOT generalizes well to off-policy data and that the resulting buying policy outperforms traditional inventory management systems in real world settings. |
1707.05421 | Yeohee Im | Yeohee Im and Sergio Verd\'u | Optimal Universal Lossless Compression with Side Information | 20 Pages, Submitted to IEEE Transactions on Information Theory, Part
of this work was presented in ISIT '17 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents conditional versions of Lempel-Ziv (LZ) algorithm for
settings where compressor and decompressor have access to the same side
information. We propose a fixed-length-parsing LZ algorithm with side
information, motivated by the Willems algorithm, and prove the optimality for
any stationary processes. In addition, we suggest strategies to improve the
algorithm which lower the data compression rate. A modification of a
variable-length-parsing LZ algorithm with side information is proposed and
proved to be asymptotically optimal for any stationary and ergodic processes.
| [
{
"created": "Tue, 18 Jul 2017 00:44:36 GMT",
"version": "v1"
}
] | 2017-07-19 | [
[
"Im",
"Yeohee",
""
],
[
"Verdú",
"Sergio",
""
]
] | This paper presents conditional versions of Lempel-Ziv (LZ) algorithm for settings where compressor and decompressor have access to the same side information. We propose a fixed-length-parsing LZ algorithm with side information, motivated by the Willems algorithm, and prove the optimality for any stationary processes. In addition, we suggest strategies to improve the algorithm which lower the data compression rate. A modification of a variable-length-parsing LZ algorithm with side information is proposed and proved to be asymptotically optimal for any stationary and ergodic processes. |
2403.10232 | Sajad Faramarzi | Sajad Faramarzi, Farzan Haddadi, Sajjad Amini, Masoud Ahookhosh | Matrix Completion via Nonsmooth Regularization of Fully Connected Neural
Networks | null | null | null | null | cs.IT cs.LG math.IT | http://creativecommons.org/licenses/by/4.0/ | Conventional matrix completion methods approximate the missing values by
assuming the matrix to be low-rank, which leads to a linear approximation of
missing values. It has been shown that enhanced performance could be attained
by using nonlinear estimators such as deep neural networks. Deep fully
connected neural networks (FCNNs), one of the most suitable architectures for
matrix completion, suffer from over-fitting due to their high capacity, which
leads to low generalizability. In this paper, we control over-fitting by
regularizing the FCNN model in terms of the $\ell_{1}$ norm of intermediate
representations and nuclear norm of weight matrices. As such, the resulting
regularized objective function becomes nonsmooth and nonconvex, i.e., existing
gradient-based methods cannot be applied to our model. We propose a variant of
the proximal gradient method and investigate its convergence to a critical
point. In the initial epochs of FCNN training, the regularization terms are
ignored, and through epochs, the effect of that increases. The gradual addition
of nonsmooth regularization terms is the main reason for the better performance
of the deep neural network with nonsmooth regularization terms (DNN-NSR)
algorithm. Our simulations indicate the superiority of the proposed algorithm
in comparison with existing linear and nonlinear algorithms.
| [
{
"created": "Fri, 15 Mar 2024 12:00:37 GMT",
"version": "v1"
}
] | 2024-03-18 | [
[
"Faramarzi",
"Sajad",
""
],
[
"Haddadi",
"Farzan",
""
],
[
"Amini",
"Sajjad",
""
],
[
"Ahookhosh",
"Masoud",
""
]
] | Conventional matrix completion methods approximate the missing values by assuming the matrix to be low-rank, which leads to a linear approximation of missing values. It has been shown that enhanced performance could be attained by using nonlinear estimators such as deep neural networks. Deep fully connected neural networks (FCNNs), one of the most suitable architectures for matrix completion, suffer from over-fitting due to their high capacity, which leads to low generalizability. In this paper, we control over-fitting by regularizing the FCNN model in terms of the $\ell_{1}$ norm of intermediate representations and nuclear norm of weight matrices. As such, the resulting regularized objective function becomes nonsmooth and nonconvex, i.e., existing gradient-based methods cannot be applied to our model. We propose a variant of the proximal gradient method and investigate its convergence to a critical point. In the initial epochs of FCNN training, the regularization terms are ignored, and through epochs, the effect of that increases. The gradual addition of nonsmooth regularization terms is the main reason for the better performance of the deep neural network with nonsmooth regularization terms (DNN-NSR) algorithm. Our simulations indicate the superiority of the proposed algorithm in comparison with existing linear and nonlinear algorithms. |
2303.06965 | Bo Qiang | Bo Qiang, Yiran Zhou, Yuheng Ding, Ningfeng Liu, Song Song, Liangren
Zhang, Bo Huang, Zhenming Liu | Bridging the Gap between Chemical Reaction Pretraining and Conditional
Molecule Generation with a Unified Model | null | null | 10.1038/s42256-023-00764-9 | null | cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chemical reactions are the fundamental building blocks of drug design and
organic chemistry research. In recent years, there has been a growing need for
a large-scale deep-learning framework that can efficiently capture the basic
rules of chemical reactions. In this paper, we have proposed a unified
framework that addresses both the reaction representation learning and molecule
generation tasks, which allows for a more holistic approach. Inspired by the
organic chemistry mechanism, we develop a novel pretraining framework that
enables us to incorporate inductive biases into the model. Our framework
achieves state-of-the-art results on challenging downstream tasks. By
possessing chemical knowledge, our generative framework overcomes the
limitations of current molecule generation models that rely on a small number
of reaction templates. In the extensive experiments, our model generates
synthesizable drug-like structures of high quality. Overall, our work presents
a significant step toward a large-scale deep-learning framework for a variety
of reaction-based applications.
| [
{
"created": "Mon, 13 Mar 2023 10:06:41 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Mar 2023 13:47:14 GMT",
"version": "v2"
},
{
"created": "Thu, 24 Aug 2023 08:33:05 GMT",
"version": "v3"
},
{
"created": "Mon, 26 Feb 2024 14:13:28 GMT",
"version": "v4"
},
{
"created": "Thu, 7 Mar 2024 14:51:12 GMT",
"version": "v5"
}
] | 2024-03-08 | [
[
"Qiang",
"Bo",
""
],
[
"Zhou",
"Yiran",
""
],
[
"Ding",
"Yuheng",
""
],
[
"Liu",
"Ningfeng",
""
],
[
"Song",
"Song",
""
],
[
"Zhang",
"Liangren",
""
],
[
"Huang",
"Bo",
""
],
[
"Liu",
"Zhenming",
""
]
] | Chemical reactions are the fundamental building blocks of drug design and organic chemistry research. In recent years, there has been a growing need for a large-scale deep-learning framework that can efficiently capture the basic rules of chemical reactions. In this paper, we have proposed a unified framework that addresses both the reaction representation learning and molecule generation tasks, which allows for a more holistic approach. Inspired by the organic chemistry mechanism, we develop a novel pretraining framework that enables us to incorporate inductive biases into the model. Our framework achieves state-of-the-art results on challenging downstream tasks. By possessing chemical knowledge, our generative framework overcomes the limitations of current molecule generation models that rely on a small number of reaction templates. In the extensive experiments, our model generates synthesizable drug-like structures of high quality. Overall, our work presents a significant step toward a large-scale deep-learning framework for a variety of reaction-based applications. |
2311.13590 | Katarzyna Paluch | Katarzyna Paluch | Triangle-free 2-matchings | A new version with a simpler definition of an impassable hinge
(previously called intractable). Also, the existence of an augmenting
amenable path is not needed | null | null | null | cs.DS cs.DM | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of finding a maximum size triangle-free 2-matching in
a graph $G$. A 2-matching is any subset of the edges such that each vertex is
incident to at most two edges from the subset. We present a fast combinatorial
algorithm for the problem. Our algorithm and its analysis are dramatically
simpler than the very complicated result by Hartvigsen from 1984. In the design
of this algorithm we use several new concepts. It has been proven before that
for any triangle-free 2-matching $M$ which is not maximum the graph contains an
$M$-augmenting path, whose application to $M$ results in a bigger triangle-free
2-matching. It was not known how to efficiently find such a path. A new
observation is that the search for an augmenting path $P$ can be restricted to
so-called {\em amenable} paths that go through any triangle $t$ contained in $P
\cup M$ a limited number of times. To find an augmenting path that is amenable
and hence whose application does not create any triangle we forbid some edges
to be followed by certain others. This operation can be thought of as using
gadgets, in which some pairs of edges get disconnected. To be able to
disconnect two edges we employ {\em half-edges}. A {\em half-edge} of edge $e$
is, informally speaking, a half of $e$ containing exactly one of its endpoints.
This is another novel application of half-edges which were previously used for
TSP and other matching problems. Additionally, gadgets are not fixed during any
augmentation phase, but are dynamically changing according to the currently
discovered state of reachability by amenable paths.
| [
{
"created": "Wed, 22 Nov 2023 18:52:51 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jan 2024 20:12:38 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Jan 2024 20:14:13 GMT",
"version": "v3"
},
{
"created": "Thu, 1 Feb 2024 18:40:07 GMT",
"version": "v4"
},
{
"created": "Thu, 18 Apr 2024 17:46:27 GMT",
"version": "v5"
},
{
"created": "Fri, 7 Jun 2024 17:30:53 GMT",
"version": "v6"
},
{
"created": "Thu, 18 Jul 2024 17:27:34 GMT",
"version": "v7"
}
] | 2024-07-19 | [
[
"Paluch",
"Katarzyna",
""
]
] | We consider the problem of finding a maximum size triangle-free 2-matching in a graph $G$. A 2-matching is any subset of the edges such that each vertex is incident to at most two edges from the subset. We present a fast combinatorial algorithm for the problem. Our algorithm and its analysis are dramatically simpler than the very complicated result by Hartvigsen from 1984. In the design of this algorithm we use several new concepts. It has been proven before that for any triangle-free 2-matching $M$ which is not maximum the graph contains an $M$-augmenting path, whose application to $M$ results in a bigger triangle-free 2-matching. It was not known how to efficiently find such a path. A new observation is that the search for an augmenting path $P$ can be restricted to so-called {\em amenable} paths that go through any triangle $t$ contained in $P \cup M$ a limited number of times. To find an augmenting path that is amenable and hence whose application does not create any triangle we forbid some edges to be followed by certain others. This operation can be thought of as using gadgets, in which some pairs of edges get disconnected. To be able to disconnect two edges we employ {\em half-edges}. A {\em half-edge} of edge $e$ is, informally speaking, a half of $e$ containing exactly one of its endpoints. This is another novel application of half-edges which were previously used for TSP and other matching problems. Additionally, gadgets are not fixed during any augmentation phase, but are dynamically changing according to the currently discovered state of reachability by amenable paths. |
1601.08116 | Tiago Azevedo | Tiago Azevedo, Rosaldo J. F. Rossetti, Jorge G. Barbosa | Densifying the sparse cloud SimSaaS: The need of a synergy among
agent-directed simulation, SimSaaS and HLA | null | Proceedings of the 5th International Conference on Simulation and
Modeling Methodologies, Technologies and Applications (2015) 172-177 | 10.5220/0005542801720177 | null | cs.MA cs.DC | http://creativecommons.org/licenses/by/4.0/ | Modelling & Simulation (M&S) is broadly used in real scenarios where making
physical modifications could be highly expensive. With the so-called Simulation
Software-as-a-Service (SimSaaS), researchers could take advantage of the huge
amount of resources that cloud computing provides. Even so, studying and
analysing a problem through simulation may need several simulation tools, hence
raising interoperability issues. Having this in mind, IEEE developed a standard
for interoperability among simulators named High Level Architecture (HLA).
Moreover, the multi-agent system approach has become recognised as a convenient
approach for modelling and simulating complex systems. Despite all the recent
works and acceptance of these technologies, there is still a great lack of work
regarding synergies among them. This paper shows by means of a literature
review this lack of work or, in other words, the sparse Cloud SimSaaS. The
literature review and the resulting taxonomy are the main contributions of this
paper, as they provide a research agenda illustrating future research
opportunities and trends.
| [
{
"created": "Fri, 29 Jan 2016 14:11:37 GMT",
"version": "v1"
}
] | 2016-02-01 | [
[
"Azevedo",
"Tiago",
""
],
[
"Rossetti",
"Rosaldo J. F.",
""
],
[
"Barbosa",
"Jorge G.",
""
]
] | Modelling & Simulation (M&S) is broadly used in real scenarios where making physical modifications could be highly expensive. With the so-called Simulation Software-as-a-Service (SimSaaS), researchers could take advantage of the huge amount of resources that cloud computing provides. Even so, studying and analysing a problem through simulation may need several simulation tools, hence raising interoperability issues. Having this in mind, IEEE developed a standard for interoperability among simulators named High Level Architecture (HLA). Moreover, the multi-agent system approach has become recognised as a convenient approach for modelling and simulating complex systems. Despite all the recent works and acceptance of these technologies, there is still a great lack of work regarding synergies among them. This paper shows by means of a literature review this lack of work or, in other words, the sparse Cloud SimSaaS. The literature review and the resulting taxonomy are the main contributions of this paper, as they provide a research agenda illustrating future research opportunities and trends. |
1603.06236 | Nik Ruskuc | Tom Bourne and Nik Ruskuc | On the star-height of factor counting languages and their relationship
to Rees zero-matrix semigroups | null | null | null | null | cs.FL math.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a word $w$ over a finite alphabet, we consider, in three special cases,
the generalised star-height of the languages in which $w$ occurs as a
contiguous subword (factor) an exact number of times and of the languages in
which $w$ occurs as a contiguous subword modulo a fixed number, and prove that
in each case it is at most one. We use these combinatorial results to show that
any language recognised by a Rees (zero-)matrix semigroup over an abelian group
is of generalised star-height at most one.
| [
{
"created": "Sun, 20 Mar 2016 16:34:03 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Sep 2016 17:01:02 GMT",
"version": "v2"
}
] | 2016-09-28 | [
[
"Bourne",
"Tom",
""
],
[
"Ruskuc",
"Nik",
""
]
] | Given a word $w$ over a finite alphabet, we consider, in three special cases, the generalised star-height of the languages in which $w$ occurs as a contiguous subword (factor) an exact number of times and of the languages in which $w$ occurs as a contiguous subword modulo a fixed number, and prove that in each case it is at most one. We use these combinatorial results to show that any language recognised by a Rees (zero-)matrix semigroup over an abelian group is of generalised star-height at most one. |
2111.01592 | Lu Zhang | Lu Zhang, Peiliang Li, Jing Chen and Shaojie Shen | Trajectory Prediction with Graph-based Dual-scale Context Fusion | Accepted by IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS) 2022. Code: https://github.com/HKUST-Aerial-Robotics/DSP | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motion prediction for traffic participants is essential for a safe and robust
automated driving system, especially in cluttered urban environments. However,
it is highly challenging due to the complex road topology as well as the
uncertain intentions of the other agents. In this paper, we present a
graph-based trajectory prediction network named the Dual Scale Predictor (DSP),
which encodes both the static and dynamical driving context in a hierarchical
manner. Different from methods based on a rasterized map or sparse lane graph,
we consider the driving context as a graph with two layers, focusing on both
geometrical and topological features. Graph neural networks (GNNs) are applied
to extract features with different levels of granularity, and features are
subsequently aggregated with attention-based inter-layer networks, realizing
better local-global feature fusion. Following the recent goal-driven trajectory
prediction pipeline, goal candidates with high likelihood for the target agent
are extracted, and predicted trajectories are generated conditioned on these
goals. Thanks to the proposed dual-scale context fusion network, our DSP is
able to generate accurate and human-like multi-modal trajectories. We evaluate
the proposed method on the large-scale Argoverse motion forecasting benchmark,
and it achieves promising results, outperforming the recent state-of-the-art
methods.
| [
{
"created": "Tue, 2 Nov 2021 13:42:16 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jul 2022 15:59:18 GMT",
"version": "v2"
}
] | 2022-08-02 | [
[
"Zhang",
"Lu",
""
],
[
"Li",
"Peiliang",
""
],
[
"Chen",
"Jing",
""
],
[
"Shen",
"Shaojie",
""
]
] | Motion prediction for traffic participants is essential for a safe and robust automated driving system, especially in cluttered urban environments. However, it is highly challenging due to the complex road topology as well as the uncertain intentions of the other agents. In this paper, we present a graph-based trajectory prediction network named the Dual Scale Predictor (DSP), which encodes both the static and dynamical driving context in a hierarchical manner. Different from methods based on a rasterized map or sparse lane graph, we consider the driving context as a graph with two layers, focusing on both geometrical and topological features. Graph neural networks (GNNs) are applied to extract features with different levels of granularity, and features are subsequently aggregated with attention-based inter-layer networks, realizing better local-global feature fusion. Following the recent goal-driven trajectory prediction pipeline, goal candidates with high likelihood for the target agent are extracted, and predicted trajectories are generated conditioned on these goals. Thanks to the proposed dual-scale context fusion network, our DSP is able to generate accurate and human-like multi-modal trajectories. We evaluate the proposed method on the large-scale Argoverse motion forecasting benchmark, and it achieves promising results, outperforming the recent state-of-the-art methods. |
2402.13606 | Boyang Xue | Boyang Xue, Hongru Wang, Rui Wang, Sheng Wang, Zezhong Wang, Yiming
Du, Kam-Fai Wong | A Comprehensive Study of Multilingual Confidence Estimation on Large
Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The tendency of Large Language Models (LLMs) to generate hallucinations and
exhibit overconfidence in predictions raises concerns regarding their
reliability. Confidence or uncertainty estimations indicating the extent of
trustworthiness of a model's response are essential to developing reliable AI
systems. Current research primarily focuses on LLM confidence estimations in
English, remaining a void for other widely used languages and impeding the
global development of reliable AI applications. This paper introduces a
comprehensive investigation of \textbf{M}ulti\textbf{ling}ual
\textbf{Conf}idence estimation (\textsc{MlingConf}) on LLMs. First, we
introduce an elaborated and expert-checked multilingual QA dataset.
Subsequently, we delve into the performance of several confidence estimation
methods across diverse languages and examine how these confidence scores can
enhance LLM performance through self-refinement. Extensive experiments
conducted on the multilingual QA dataset demonstrate that confidence estimation
results vary in different languages, and the verbalized numerical confidence
estimation method exhibits the best performance among most languages over other
methods. Finally, the obtained confidence scores can consistently improve
performance as self-refinement feedback across various languages.
| [
{
"created": "Wed, 21 Feb 2024 08:20:06 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Jun 2024 07:28:26 GMT",
"version": "v2"
}
] | 2024-06-18 | [
[
"Xue",
"Boyang",
""
],
[
"Wang",
"Hongru",
""
],
[
"Wang",
"Rui",
""
],
[
"Wang",
"Sheng",
""
],
[
"Wang",
"Zezhong",
""
],
[
"Du",
"Yiming",
""
],
[
"Wong",
"Kam-Fai",
""
]
] | The tendency of Large Language Models (LLMs) to generate hallucinations and exhibit overconfidence in predictions raises concerns regarding their reliability. Confidence or uncertainty estimations indicating the extent of trustworthiness of a model's response are essential to developing reliable AI systems. Current research primarily focuses on LLM confidence estimations in English, remaining a void for other widely used languages and impeding the global development of reliable AI applications. This paper introduces a comprehensive investigation of \textbf{M}ulti\textbf{ling}ual \textbf{Conf}idence estimation (\textsc{MlingConf}) on LLMs. First, we introduce an elaborated and expert-checked multilingual QA dataset. Subsequently, we delve into the performance of several confidence estimation methods across diverse languages and examine how these confidence scores can enhance LLM performance through self-refinement. Extensive experiments conducted on the multilingual QA dataset demonstrate that confidence estimation results vary in different languages, and the verbalized numerical confidence estimation method exhibits the best performance among most languages over other methods. Finally, the obtained confidence scores can consistently improve performance as self-refinement feedback across various languages. |
2008.09697 | Long Chen | Long Chen, Zheheng Jiang, Lei Tong, Zhihua Liu, Aite Zhao, Qianni
Zhang, Junyu Dong, and Huiyu Zhou | Perceptual underwater image enhancement with deep learning and physical
priors | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Underwater image enhancement, as a pre-processing step to improve the
accuracy of the following object detection task, has drawn considerable
attention in the field of underwater navigation and ocean exploration. However,
most of the existing underwater image enhancement strategies tend to consider
enhancement and detection as two independent modules with no interaction, and
the practice of separate optimization does not always help the underwater
object detection task. In this paper, we propose two perceptual enhancement
models, each of which uses a deep enhancement model with a detection perceptor.
The detection perceptor provides coherent information in the form of gradients
to the enhancement model, guiding the enhancement model to generate patch level
visually pleasing images or detection favourable images. In addition, due to
the lack of training data, a hybrid underwater image synthesis model, which
fuses physical priors and data-driven cues, is proposed to synthesize training
data and generalise our enhancement model for real-world underwater images.
Experimental results show the superiority of our proposed method over several
state-of-the-art methods on both real-world and synthetic underwater datasets.
| [
{
"created": "Fri, 21 Aug 2020 22:11:34 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Sep 2020 21:30:53 GMT",
"version": "v2"
}
] | 2020-09-29 | [
[
"Chen",
"Long",
""
],
[
"Jiang",
"Zheheng",
""
],
[
"Tong",
"Lei",
""
],
[
"Liu",
"Zhihua",
""
],
[
"Zhao",
"Aite",
""
],
[
"Zhang",
"Qianni",
""
],
[
"Dong",
"Junyu",
""
],
[
"Zhou",
"Huiyu",
""
]
] | Underwater image enhancement, as a pre-processing step to improve the accuracy of the following object detection task, has drawn considerable attention in the field of underwater navigation and ocean exploration. However, most of the existing underwater image enhancement strategies tend to consider enhancement and detection as two independent modules with no interaction, and the practice of separate optimization does not always help the underwater object detection task. In this paper, we propose two perceptual enhancement models, each of which uses a deep enhancement model with a detection perceptor. The detection perceptor provides coherent information in the form of gradients to the enhancement model, guiding the enhancement model to generate patch level visually pleasing images or detection favourable images. In addition, due to the lack of training data, a hybrid underwater image synthesis model, which fuses physical priors and data-driven cues, is proposed to synthesize training data and generalise our enhancement model for real-world underwater images. Experimental results show the superiority of our proposed method over several state-of-the-art methods on both real-world and synthetic underwater datasets. |
2010.16335 | Roberto Pacheco | Roberto G. Pacheco, Rodrigo S. Couto and Osvaldo Simeone | Calibration-Aided Edge Inference Offloading via Adaptive Model
Partitioning of Deep Neural Networks | to appear in Proc. IEEE International Conference on Communications
(ICC) 2021 | null | null | null | cs.LG cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile devices can offload deep neural network (DNN)-based inference to the
cloud, overcoming local hardware and energy limitations. However, offloading
adds communication delay, thus increasing the overall inference time, and hence
it should be used only when needed. An approach to address this problem
consists of the use of adaptive model partitioning based on early-exit DNNs.
Accordingly, the inference starts at the mobile device, and an intermediate
layer estimates the accuracy: If the estimated accuracy is sufficient, the
device takes the inference decision; Otherwise, the remaining layers of the DNN
run at the cloud. Thus, the device offloads the inference to the cloud only if
it cannot classify a sample with high confidence. This offloading requires a
correct accuracy prediction at the device. Nevertheless, DNNs are typically
miscalibrated, providing overconfident decisions. This work shows that the
employment of a miscalibrated early-exit DNN for offloading via model
partitioning can significantly decrease inference accuracy. In contrast, we
argue that implementing a calibration algorithm prior to deployment can solve
this problem, allowing for more reliable offloading decisions.
| [
{
"created": "Fri, 30 Oct 2020 15:50:12 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Jan 2021 14:05:38 GMT",
"version": "v2"
}
] | 2021-01-29 | [
[
"Pacheco",
"Roberto G.",
""
],
[
"Couto",
"Rodrigo S.",
""
],
[
"Simeone",
"Osvaldo",
""
]
] | Mobile devices can offload deep neural network (DNN)-based inference to the cloud, overcoming local hardware and energy limitations. However, offloading adds communication delay, thus increasing the overall inference time, and hence it should be used only when needed. An approach to address this problem consists of the use of adaptive model partitioning based on early-exit DNNs. Accordingly, the inference starts at the mobile device, and an intermediate layer estimates the accuracy: If the estimated accuracy is sufficient, the device takes the inference decision; Otherwise, the remaining layers of the DNN run at the cloud. Thus, the device offloads the inference to the cloud only if it cannot classify a sample with high confidence. This offloading requires a correct accuracy prediction at the device. Nevertheless, DNNs are typically miscalibrated, providing overconfident decisions. This work shows that the employment of a miscalibrated early-exit DNN for offloading via model partitioning can significantly decrease inference accuracy. In contrast, we argue that implementing a calibration algorithm prior to deployment can solve this problem, allowing for more reliable offloading decisions. |
2007.03377 | Ryan Parker | Paul Wright, Catherine White, Ryan C. Parker, Jean-S\'ebastien Pegon,
Marco Menchetti, Joseph Pearse, Arash Bahrami, Anastasia Moroz, Adrian
Wonfor, Richard V. Penty, Timothy P. Spiller and Andrew Lord | 5G Network Slicing with QKD and Quantum-Safe Security | 9 pages, 7 figures | J. Opt. Commun. Netw. 13, 33-40 (2021) | 10.1364/JOCN.413918 | null | cs.NI quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate how the 5G network slicing model can be extended to address
data security requirements. In this work we demonstrate two different slice
configurations, with different encryption requirements, representing two
diverse use-cases for 5G networking: namely, an enterprise application hosted
at a metro network site, and a content delivery network. We create a modified
software-defined networking (SDN) orchestrator which calculates and provisions
network slices according to the requirements, including encryption backed by
quantum key distribution (QKD), or other methods. Slices are automatically
provisioned by SDN orchestration of network resources, allowing selection of
encrypted links as appropriate, including those which use standard
Diffie-Hellman key exchange, QKD and quantum-resistant algorithms (QRAs), as
well as no encryption at all. We show that the set-up and tear-down times of
the network slices take of the order of 1-2 minutes, which is an order of
magnitude improvement over manually provisioning a link today.
| [
{
"created": "Tue, 7 Jul 2020 12:14:52 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jan 2021 09:11:33 GMT",
"version": "v2"
}
] | 2021-01-25 | [
[
"Wright",
"Paul",
""
],
[
"White",
"Catherine",
""
],
[
"Parker",
"Ryan C.",
""
],
[
"Pegon",
"Jean-Sébastien",
""
],
[
"Menchetti",
"Marco",
""
],
[
"Pearse",
"Joseph",
""
],
[
"Bahrami",
"Arash",
""
],
[
"Moroz",
"Anastasia",
""
],
[
"Wonfor",
"Adrian",
""
],
[
"Penty",
"Richard V.",
""
],
[
"Spiller",
"Timothy P.",
""
],
[
"Lord",
"Andrew",
""
]
] | We demonstrate how the 5G network slicing model can be extended to address data security requirements. In this work we demonstrate two different slice configurations, with different encryption requirements, representing two diverse use-cases for 5G networking: namely, an enterprise application hosted at a metro network site, and a content delivery network. We create a modified software-defined networking (SDN) orchestrator which calculates and provisions network slices according to the requirements, including encryption backed by quantum key distribution (QKD), or other methods. Slices are automatically provisioned by SDN orchestration of network resources, allowing selection of encrypted links as appropriate, including those which use standard Diffie-Hellman key exchange, QKD and quantum-resistant algorithms (QRAs), as well as no encryption at all. We show that the set-up and tear-down times of the network slices take of the order of 1-2 minutes, which is an order of magnitude improvement over manually provisioning a link today. |
2007.03800 | Paul Irofti | Cristian Rusu and Paul Irofti | Efficient and Parallel Separable Dictionary Learning | null | null | null | null | cs.LG cs.NA eess.IV math.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Separable, or Kronecker product, dictionaries provide natural decompositions
for 2D signals, such as images. In this paper, we describe a highly
parallelizable algorithm that learns such dictionaries which reaches sparse
representations competitive with the previous state of the art dictionary
learning algorithms from the literature but at a lower computational cost. We
highlight the performance of the proposed method to sparsely represent image
and hyperspectral data, and for image denoising.
| [
{
"created": "Tue, 7 Jul 2020 21:46:32 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Aug 2020 20:15:43 GMT",
"version": "v2"
},
{
"created": "Thu, 17 Dec 2020 14:44:06 GMT",
"version": "v3"
},
{
"created": "Wed, 1 Dec 2021 22:50:00 GMT",
"version": "v4"
}
] | 2021-12-03 | [
[
"Rusu",
"Cristian",
""
],
[
"Irofti",
"Paul",
""
]
] | Separable, or Kronecker product, dictionaries provide natural decompositions for 2D signals, such as images. In this paper, we describe a highly parallelizable algorithm that learns such dictionaries which reaches sparse representations competitive with the previous state of the art dictionary learning algorithms from the literature but at a lower computational cost. We highlight the performance of the proposed method to sparsely represent image and hyperspectral data, and for image denoising. |
2305.16049 | Lantian Li Mr. | Lantian Li and Xiaolou Li and Haoyu Jiang and Chen Chen and Ruihai Hou
and Dong Wang | CN-Celeb-AV: A Multi-Genre Audio-Visual Dataset for Person Recognition | INTERSPEECH 2023 | null | null | null | cs.CV cs.MM cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-visual person recognition (AVPR) has received extensive attention.
However, most datasets used for AVPR research so far are collected in
constrained environments, and thus cannot reflect the true performance of AVPR
systems in real-world scenarios. To meet the request for research on AVPR in
unconstrained conditions, this paper presents a multi-genre AVPR dataset
collected `in the wild', named CN-Celeb-AV. This dataset contains more than
419k video segments from 1,136 persons from public media. In particular, we put
more emphasis on two real-world complexities: (1) data in multiple genres; (2)
segments with partial information. A comprehensive study was conducted to
compare CN-Celeb-AV with two popular public AVPR benchmark datasets, and the
results demonstrated that CN-Celeb-AV is more in line with real-world scenarios
and can be regarded as a new benchmark dataset for AVPR research. The dataset
also involves a development set that can be used to boost the performance of
AVPR systems in real-life situations. The dataset is free for researchers and
can be downloaded from http://cnceleb.org/.
| [
{
"created": "Thu, 25 May 2023 13:31:37 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Jul 2023 15:13:23 GMT",
"version": "v2"
}
] | 2023-07-31 | [
[
"Li",
"Lantian",
""
],
[
"Li",
"Xiaolou",
""
],
[
"Jiang",
"Haoyu",
""
],
[
"Chen",
"Chen",
""
],
[
"Hou",
"Ruihai",
""
],
[
"Wang",
"Dong",
""
]
] | Audio-visual person recognition (AVPR) has received extensive attention. However, most datasets used for AVPR research so far are collected in constrained environments, and thus cannot reflect the true performance of AVPR systems in real-world scenarios. To meet the request for research on AVPR in unconstrained conditions, this paper presents a multi-genre AVPR dataset collected `in the wild', named CN-Celeb-AV. This dataset contains more than 419k video segments from 1,136 persons from public media. In particular, we put more emphasis on two real-world complexities: (1) data in multiple genres; (2) segments with partial information. A comprehensive study was conducted to compare CN-Celeb-AV with two popular public AVPR benchmark datasets, and the results demonstrated that CN-Celeb-AV is more in line with real-world scenarios and can be regarded as a new benchmark dataset for AVPR research. The dataset also involves a development set that can be used to boost the performance of AVPR systems in real-life situations. The dataset is free for researchers and can be downloaded from http://cnceleb.org/. |
2303.08536 | Yong Man Ro | Joanna Hong, Minsu Kim, Jeongsoo Choi, Yong Man Ro | Watch or Listen: Robust Audio-Visual Speech Recognition with Visual
Corruption Modeling and Reliability Scoring | Accepted at CVPR 2023. Implementation available:
https://github.com/joannahong/AV-RelScore | null | null | null | cs.MM cs.CV cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper deals with Audio-Visual Speech Recognition (AVSR) under multimodal
input corruption situations where audio inputs and visual inputs are both
corrupted, which is not well addressed in previous research directions.
Previous studies have focused on how to complement the corrupted audio inputs
with the clean visual inputs with the assumption of the availability of clean
visual inputs. However, in real life, clean visual inputs are not always
accessible and can even be corrupted by occluded lip regions or noises. Thus,
we firstly analyze that the previous AVSR models are not indeed robust to the
corruption of multimodal input streams, the audio and the visual inputs,
compared to uni-modal models. Then, we design multimodal input corruption
modeling to develop robust AVSR models. Lastly, we propose a novel AVSR
framework, namely Audio-Visual Reliability Scoring module (AV-RelScore), that
is robust to the corrupted multimodal inputs. The AV-RelScore can determine
which input modal stream is reliable or not for the prediction and also can
exploit the more reliable streams in prediction. The effectiveness of the
proposed method is evaluated with comprehensive experiments on popular
benchmark databases, LRS2 and LRS3. We also show that the reliability scores
obtained by AV-RelScore well reflect the degree of corruption and make the
proposed model focus on the reliable multimodal representations.
| [
{
"created": "Wed, 15 Mar 2023 11:29:36 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Mar 2023 07:01:45 GMT",
"version": "v2"
}
] | 2023-03-21 | [
[
"Hong",
"Joanna",
""
],
[
"Kim",
"Minsu",
""
],
[
"Choi",
"Jeongsoo",
""
],
[
"Ro",
"Yong Man",
""
]
] | This paper deals with Audio-Visual Speech Recognition (AVSR) under multimodal input corruption situations where audio inputs and visual inputs are both corrupted, which is not well addressed in previous research directions. Previous studies have focused on how to complement the corrupted audio inputs with the clean visual inputs with the assumption of the availability of clean visual inputs. However, in real life, clean visual inputs are not always accessible and can even be corrupted by occluded lip regions or noises. Thus, we firstly analyze that the previous AVSR models are not indeed robust to the corruption of multimodal input streams, the audio and the visual inputs, compared to uni-modal models. Then, we design multimodal input corruption modeling to develop robust AVSR models. Lastly, we propose a novel AVSR framework, namely Audio-Visual Reliability Scoring module (AV-RelScore), that is robust to the corrupted multimodal inputs. The AV-RelScore can determine which input modal stream is reliable or not for the prediction and also can exploit the more reliable streams in prediction. The effectiveness of the proposed method is evaluated with comprehensive experiments on popular benchmark databases, LRS2 and LRS3. We also show that the reliability scores obtained by AV-RelScore well reflect the degree of corruption and make the proposed model focus on the reliable multimodal representations. |
1910.04071 | Annika Reinke | Lena Maier-Hein, Annika Reinke, Michal Kozubek, Anne L. Martel, Tal
Arbel, Matthias Eisenmann, Allan Hanbuary, Pierre Jannin, Henning M\"uller,
Sinan Onogur, Julio Saez-Rodriguez, Bram van Ginneken, Annette
Kopp-Schneider, Bennett Landman | BIAS: Transparent reporting of biomedical image analysis challenges | 2 Appendices - Appendix A: BIAS reporting guideline for biomedical
image analysis challenges, Appendix B: Glossary; 2 Supplements - Suppl 1:
Form for summarizing information on challenge organization, Suppl 2:
Structured description of a challenge design | null | null | null | cs.CV cs.CY eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of biomedical image analysis challenges organized per year is
steadily increasing. These international competitions have the purpose of
benchmarking algorithms on common data sets, typically to identify the best
method for a given problem. Recent research, however, revealed that common
practice related to challenge reporting does not allow for adequate
interpretation and reproducibility of results. To address the discrepancy
between the impact of challenges and the quality (control), the Biomedical
Image Analysis ChallengeS (BIAS) initiative developed a set of recommendations
for the reporting of challenges. The BIAS statement aims to improve the
transparency of the reporting of a biomedical image analysis challenge
regardless of field of application, image modality or task category assessed.
This article describes how the BIAS statement was developed and presents a
checklist which authors of biomedical image analysis challenges are encouraged
to include in their submission when giving a paper on a challenge into review.
The purpose of the checklist is to standardize and facilitate the review
process and raise interpretability and reproducibility of challenge results by
making relevant information explicit.
| [
{
"created": "Wed, 9 Oct 2019 15:30:33 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Oct 2019 06:27:03 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Nov 2019 12:26:11 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Jun 2020 06:58:41 GMT",
"version": "v4"
},
{
"created": "Mon, 31 Aug 2020 13:04:02 GMT",
"version": "v5"
}
] | 2020-09-01 | [
[
"Maier-Hein",
"Lena",
""
],
[
"Reinke",
"Annika",
""
],
[
"Kozubek",
"Michal",
""
],
[
"Martel",
"Anne L.",
""
],
[
"Arbel",
"Tal",
""
],
[
"Eisenmann",
"Matthias",
""
],
[
"Hanbuary",
"Allan",
""
],
[
"Jannin",
"Pierre",
""
],
[
"Müller",
"Henning",
""
],
[
"Onogur",
"Sinan",
""
],
[
"Saez-Rodriguez",
"Julio",
""
],
[
"van Ginneken",
"Bram",
""
],
[
"Kopp-Schneider",
"Annette",
""
],
[
"Landman",
"Bennett",
""
]
] | The number of biomedical image analysis challenges organized per year is steadily increasing. These international competitions have the purpose of benchmarking algorithms on common data sets, typically to identify the best method for a given problem. Recent research, however, revealed that common practice related to challenge reporting does not allow for adequate interpretation and reproducibility of results. To address the discrepancy between the impact of challenges and the quality (control), the Biomedical Image Analysis ChallengeS (BIAS) initiative developed a set of recommendations for the reporting of challenges. The BIAS statement aims to improve the transparency of the reporting of a biomedical image analysis challenge regardless of field of application, image modality or task category assessed. This article describes how the BIAS statement was developed and presents a checklist which authors of biomedical image analysis challenges are encouraged to include in their submission when giving a paper on a challenge into review. The purpose of the checklist is to standardize and facilitate the review process and raise interpretability and reproducibility of challenge results by making relevant information explicit. |
1903.09106 | Diego Didona Dr | Diego Didona, Panagiota Fatourou, Rachid Guerraoui, Jingjing Wang,
Willy Zwaenepoel | Distributed Transactional Systems Cannot Be Fast | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove that no fully transactional system can provide fast read
transactions (including read-only ones that are considered the most frequent in
practice). Specifically, to achieve fast read transactions, the system has to
give up support of transactions that write more than one object. We prove this
impossibility result for distributed storage systems that are causally
consistent, i.e., they do not require to ensure any strong form of consistency.
Therefore, our result holds also for any system that ensures a consistency
level stronger than causal consistency, e.g., strict serializability. The
impossibility result holds even for systems that store only two objects (and
support at least two servers and at least four clients). It also holds for
systems that are partially replicated. Our result justifies the design choices
of state-of-the-art distributed transactional systems and insists that system
designers should not put more effort to design fully-functional systems that
support both fast read transactions and ensure causal or any stronger form of
consistency.
| [
{
"created": "Thu, 21 Mar 2019 16:43:03 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2019 05:59:22 GMT",
"version": "v2"
}
] | 2019-04-11 | [
[
"Didona",
"Diego",
""
],
[
"Fatourou",
"Panagiota",
""
],
[
"Guerraoui",
"Rachid",
""
],
[
"Wang",
"Jingjing",
""
],
[
"Zwaenepoel",
"Willy",
""
]
] | We prove that no fully transactional system can provide fast read transactions (including read-only ones that are considered the most frequent in practice). Specifically, to achieve fast read transactions, the system has to give up support of transactions that write more than one object. We prove this impossibility result for distributed storage systems that are causally consistent, i.e., they do not require to ensure any strong form of consistency. Therefore, our result holds also for any system that ensures a consistency level stronger than causal consistency, e.g., strict serializability. The impossibility result holds even for systems that store only two objects (and support at least two servers and at least four clients). It also holds for systems that are partially replicated. Our result justifies the design choices of state-of-the-art distributed transactional systems and insists that system designers should not put more effort to design fully-functional systems that support both fast read transactions and ensure causal or any stronger form of consistency. |
1705.10552 | Longquan Dai | Longquan Dai | Interpreting and Extending The Guided Filter Via Cyclic Coordinate
Descent | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we will disclose that the Guided Filter (GF) can be
interpreted as the Cyclic Coordinate Descent (CCD) solver of a Least Square
(LS) objective function. This discovery implies a possible way to extend GF
because we can alter the objective function of GF and define new filters as the
first pass iteration of the CCD solver of modified objective functions.
Moreover, referring to the iterative minimizing procedure of CCD, we can derive
new rolling filtering schemes. Hence, under the guidance of this discovery, we
not only propose new GF-like filters adapting to the specific requirements of
applications but also offer thorough explanations for two rolling filtering
schemes of GF as well as the way to extend them. Experiments show that our new
filters and extensions produce state-of-the-art results.
| [
{
"created": "Tue, 30 May 2017 11:29:23 GMT",
"version": "v1"
}
] | 2017-05-31 | [
[
"Dai",
"Longquan",
""
]
] | In this paper, we will disclose that the Guided Filter (GF) can be interpreted as the Cyclic Coordinate Descent (CCD) solver of a Least Square (LS) objective function. This discovery implies a possible way to extend GF because we can alter the objective function of GF and define new filters as the first pass iteration of the CCD solver of modified objective functions. Moreover, referring to the iterative minimizing procedure of CCD, we can derive new rolling filtering schemes. Hence, under the guidance of this discovery, we not only propose new GF-like filters adapting to the specific requirements of applications but also offer thorough explanations for two rolling filtering schemes of GF as well as the way to extend them. Experiments show that our new filters and extensions produce state-of-the-art results. |