Column schema (min/max lengths as reported by the dataset viewer; ⌀ marks nullable columns):

- id — string, 9 to 10 chars
- submitter — string, 1 to 64 chars ⌀
- authors — string, 4 to 20.7k chars
- title — string, 4 to 246 chars
- comments — string, 1 to 523 chars ⌀
- journal-ref — string, 4 to 404 chars ⌀
- doi — string, 11 to 153 chars ⌀
- report-no — string, 2 to 254 chars ⌀
- categories — string, 5 to 98 chars
- license — string, 9 distinct values
- orig_abstract — string, 14 to 3.35k chars
- versions — list, 1 to 60 items
- update_date — string, 10 chars
- authors_parsed — list, 1 to 1.35k items
- abstract — string, 11 to 3.34k chars

| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.10047
|
Yixin Liu
|
Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co-Reyes, Peter J.
Liu
|
Improving Large Language Model Fine-tuning for Solving Math Problems
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite their success in many natural language tasks, solving math problems
remains a significant challenge for large language models (LLMs). A large gap
exists between LLMs' pass-at-one and pass-at-N performance in solving math
problems, suggesting LLMs might be close to finding correct solutions; this
motivates our exploration of fine-tuning methods to unlock LLMs' performance.
Using the challenging MATH dataset, we investigate three fine-tuning
strategies: (1) solution fine-tuning, where we fine-tune to generate a detailed
solution for a given math problem; (2) solution-cluster re-ranking, where the
LLM is fine-tuned as a solution verifier/evaluator to choose among generated
candidate solution clusters; (3) multi-task sequential fine-tuning, which
integrates both solution generation and evaluation tasks together efficiently
to enhance the LLM performance. With these methods, we present a thorough
empirical study on a series of PaLM 2 models and find: (1) The quality and
style of the step-by-step solutions used for fine-tuning can make a significant
impact on the model performance; (2) While solution re-ranking and majority
voting are both effective for improving the model performance when used
separately, they can also be used together for an even greater performance
boost; (3) Multi-task fine-tuning that sequentially separates the solution
generation and evaluation tasks can offer improved performance compared with
the solution fine-tuning baseline. Guided by these insights, we design a
fine-tuning recipe that yields approximately 58.8% accuracy on the MATH dataset
with fine-tuned PaLM 2-L models, an 11.2% accuracy improvement over the
few-shot performance of the pre-trained PaLM 2-L model with majority voting.
|
[
{
"created": "Mon, 16 Oct 2023 04:11:19 GMT",
"version": "v1"
}
] |
2023-10-17
|
[
[
"Liu",
"Yixin",
""
],
[
"Singh",
"Avi",
""
],
[
"Freeman",
"C. Daniel",
""
],
[
"Co-Reyes",
"John D.",
""
],
[
"Liu",
"Peter J.",
""
]
] |
Despite their success in many natural language tasks, solving math problems remains a significant challenge for large language models (LLMs). A large gap exists between LLMs' pass-at-one and pass-at-N performance in solving math problems, suggesting LLMs might be close to finding correct solutions; this motivates our exploration of fine-tuning methods to unlock LLMs' performance. Using the challenging MATH dataset, we investigate three fine-tuning strategies: (1) solution fine-tuning, where we fine-tune to generate a detailed solution for a given math problem; (2) solution-cluster re-ranking, where the LLM is fine-tuned as a solution verifier/evaluator to choose among generated candidate solution clusters; (3) multi-task sequential fine-tuning, which integrates both solution generation and evaluation tasks together efficiently to enhance the LLM performance. With these methods, we present a thorough empirical study on a series of PaLM 2 models and find: (1) The quality and style of the step-by-step solutions used for fine-tuning can make a significant impact on the model performance; (2) While solution re-ranking and majority voting are both effective for improving the model performance when used separately, they can also be used together for an even greater performance boost; (3) Multi-task fine-tuning that sequentially separates the solution generation and evaluation tasks can offer improved performance compared with the solution fine-tuning baseline. Guided by these insights, we design a fine-tuning recipe that yields approximately 58.8% accuracy on the MATH dataset with fine-tuned PaLM 2-L models, an 11.2% accuracy improvement over the few-shot performance of the pre-trained PaLM 2-L model with majority voting.
|
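The majority-voting step combined with re-ranking in the abstract above can be sketched generically. This is not the authors' PaLM 2 pipeline; `majority_vote` and the sample answers are illustrative:

```python
from collections import Counter

def majority_vote(sampled_answers):
    """Return the most frequent final answer among N sampled solutions.

    sampled_answers: final answers extracted from N independently
    sampled step-by-step solutions (illustrative input).
    """
    if not sampled_answers:
        return None
    answer, _count = Counter(sampled_answers).most_common(1)[0]
    return answer

# Five sampled solutions to one problem; three agree on "42".
print(majority_vote(["42", "41", "42", "7", "42"]))  # -> 42
```

Solution-cluster re-ranking can then be layered on top by grouping solutions that share a final answer and letting a fine-tuned verifier score each cluster rather than relying on counts alone.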
2307.14632
|
G\"ozde G\"ul \c{S}ahin
|
Subha Vadlamannati, G\"ozde G\"ul \c{S}ahin
|
Metric-Based In-context Learning: A Case Study in Text Simplification
|
Accepted to INLG
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In-context learning (ICL) for large language models has proven to be a
powerful approach for many natural language processing tasks. However,
determining the best method to select examples for ICL is nontrivial as the
results can vary greatly depending on the quality, quantity, and order of
examples used. In this paper, we conduct a case study on text simplification
(TS) to investigate how to select the best and most robust examples for ICL. We
propose the Metric-Based In-context Learning (MBL) method, which utilizes commonly
used TS metrics such as SARI, compression ratio, and BERT-Precision for
selection. Through an extensive set of experiments with various-sized GPT
models on standard TS benchmarks such as TurkCorpus and ASSET, we show that
examples selected by the top SARI scores perform the best on larger models such
as GPT-175B, while the compression ratio generally performs better on smaller
models such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL is
generally robust to example orderings and out-of-domain test sets, and
outperforms strong baselines and state-of-the-art finetuned language models.
Finally, we show that the behaviour of large GPT models can be implicitly
controlled by the chosen metric. Our research provides a new framework for
selecting examples in ICL, and demonstrates its effectiveness in text
simplification tasks, breaking new ground for more accurate and efficient NLG
systems.
|
[
{
"created": "Thu, 27 Jul 2023 05:45:35 GMT",
"version": "v1"
}
] |
2023-07-28
|
[
[
"Vadlamannati",
"Subha",
""
],
[
"Şahin",
"Gözde Gül",
""
]
] |
In-context learning (ICL) for large language models has proven to be a powerful approach for many natural language processing tasks. However, determining the best method to select examples for ICL is nontrivial as the results can vary greatly depending on the quality, quantity, and order of examples used. In this paper, we conduct a case study on text simplification (TS) to investigate how to select the best and most robust examples for ICL. We propose the Metric-Based In-context Learning (MBL) method, which utilizes commonly used TS metrics such as SARI, compression ratio, and BERT-Precision for selection. Through an extensive set of experiments with various-sized GPT models on standard TS benchmarks such as TurkCorpus and ASSET, we show that examples selected by the top SARI scores perform the best on larger models such as GPT-175B, while the compression ratio generally performs better on smaller models such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL is generally robust to example orderings and out-of-domain test sets, and outperforms strong baselines and state-of-the-art finetuned language models. Finally, we show that the behaviour of large GPT models can be implicitly controlled by the chosen metric. Our research provides a new framework for selecting examples in ICL, and demonstrates its effectiveness in text simplification tasks, breaking new ground for more accurate and efficient NLG systems.
|
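Of the metrics named in this abstract, the compression-ratio criterion is simple enough to sketch without external tooling. The function names and example pairs below are hypothetical, and note that SARI and BERT-Precision, which MBL also supports, need reference simplifications or a model:

```python
def compression_ratio(source, simplified):
    # Character-level length ratio: lower values mean more aggressive simplification.
    return len(simplified) / max(len(source), 1)

def select_icl_examples(pairs, k=3):
    """Rank candidate (source, simplified) pairs by compression ratio and
    return the k most compressive ones to use as in-context examples."""
    return sorted(pairs, key=lambda p: compression_ratio(*p))[:k]

pairs = [
    ("The committee reached a unanimous decision.", "Everyone on the committee agreed."),
    ("He perambulated around the vicinity.", "He walked around."),
    ("The weather is nice.", "The weather is nice."),
]
for source, simplified in select_icl_examples(pairs, k=2):
    print(simplified)
```

The selected pairs would then be concatenated, in ranked order, in front of the test sentence as the ICL prompt.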
1312.3372
|
Giorgi Japaridze
|
Giorgi Japaridze
|
On resources and tasks
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Essentially being an extended abstract of the author's 1998 PhD thesis, this
paper introduces an extension of the language of linear logic with a semantics
which treats sentences as tasks rather than true/false statements. A resource
is understood as an agent capable of accomplishing the task expressed by such a
sentence. It is argued that the corresponding logic can be used as a planning
logic, whose advantage over the traditional comprehensive planning logics is
that it avoids the representational frame problem and significantly alleviates
the inferential frame problem.
|
[
{
"created": "Wed, 11 Dec 2013 23:39:01 GMT",
"version": "v1"
}
] |
2013-12-13
|
[
[
"Japaridze",
"Giorgi",
""
]
] |
Essentially being an extended abstract of the author's 1998 PhD thesis, this paper introduces an extension of the language of linear logic with a semantics which treats sentences as tasks rather than true/false statements. A resource is understood as an agent capable of accomplishing the task expressed by such a sentence. It is argued that the corresponding logic can be used as a planning logic, whose advantage over the traditional comprehensive planning logics is that it avoids the representational frame problem and significantly alleviates the inferential frame problem.
|
1905.07350
|
Edvinas Byla
|
Edvinas Byla and Wei Pang
|
DeepSwarm: Optimising Convolutional Neural Networks using Swarm
Intelligence
|
13 pages, 6 figures, to access DeepSwarm code go to
https://github.com/Pattio/DeepSwarm
| null | null | null |
cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose DeepSwarm, a novel neural architecture search (NAS)
method based on Swarm Intelligence principles. At its core, DeepSwarm uses Ant
Colony Optimization (ACO) to generate an ant population that uses pheromone
information to collectively search for the best neural architecture.
Furthermore, by using local and global pheromone update rules our method
ensures the balance between exploitation and exploration. On top of this, to
make our method more efficient we combine progressive neural architecture
search with weight reusability. Furthermore, due to the nature of ACO our
method can incorporate heuristic information which can further speed up the
search process. After systematic and extensive evaluation, we discover that on
three different datasets (MNIST, Fashion-MNIST, and CIFAR-10) when compared to
existing systems our proposed method demonstrates competitive performance.
Finally, we open source DeepSwarm as a NAS library and hope it can be used by
more deep learning researchers and practitioners.
|
[
{
"created": "Fri, 17 May 2019 16:13:38 GMT",
"version": "v1"
}
] |
2019-05-20
|
[
[
"Byla",
"Edvinas",
""
],
[
"Pang",
"Wei",
""
]
] |
In this paper we propose DeepSwarm, a novel neural architecture search (NAS) method based on Swarm Intelligence principles. At its core, DeepSwarm uses Ant Colony Optimization (ACO) to generate an ant population that uses pheromone information to collectively search for the best neural architecture. Furthermore, by using local and global pheromone update rules our method ensures the balance between exploitation and exploration. On top of this, to make our method more efficient we combine progressive neural architecture search with weight reusability. Furthermore, due to the nature of ACO our method can incorporate heuristic information which can further speed up the search process. After systematic and extensive evaluation, we discover that on three different datasets (MNIST, Fashion-MNIST, and CIFAR-10) when compared to existing systems our proposed method demonstrates competitive performance. Finally, we open source DeepSwarm as a NAS library and hope it can be used by more deep learning researchers and practitioners.
|
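The local and global pheromone rules mentioned in this abstract follow the standard Ant Colony System pattern. The sketch below is a generic illustration with made-up layer choices and decay rates, not DeepSwarm's actual implementation (that code lives in the linked repository):

```python
import random

# Toy search space: one layer slot with three candidate components.
CHOICES = ["conv3x3", "conv5x5", "maxpool"]
TAU0 = 0.1  # initial pheromone level
pheromone = {c: TAU0 for c in CHOICES}

def sample_component(rng=random):
    # An ant picks a component with probability proportional to its pheromone.
    total = sum(pheromone.values())
    r, acc = rng.uniform(0, total), 0.0
    for c, tau in pheromone.items():
        acc += tau
        if r <= acc:
            return c
    return CHOICES[-1]

def local_update(choice, phi=0.1):
    # Local rule: decay the chosen edge toward TAU0, nudging later ants to explore.
    pheromone[choice] = (1 - phi) * pheromone[choice] + phi * TAU0

def global_update(best_choice, quality, rho=0.1):
    # Global rule: reinforce the best ant's choice in proportion to its
    # architecture's evaluated quality (e.g. validation accuracy).
    pheromone[best_choice] = (1 - rho) * pheromone[best_choice] + rho * quality

global_update("conv3x3", quality=0.9)
print(pheromone["conv3x3"] > TAU0)  # -> True
```

The interplay of the two rules gives the exploitation/exploration balance the abstract refers to: the global rule concentrates pheromone on good choices, while the local rule erodes it so ants do not all follow the same path.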
2306.15927
|
Arash Hajisafi
|
Arash Hajisafi, Haowen Lin, Sina Shaham, Haoji Hu, Maria Despoina
Siampou, Yao-Yi Chiang, Cyrus Shahabi
|
Learning Dynamic Graphs from All Contextual Information for Accurate
Point-of-Interest Visit Forecasting
| null | null |
10.1145/3589132.3625567
| null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Forecasting the number of visits to Points-of-Interest (POI) in an urban area
is critical for planning and decision-making for various application domains,
from urban planning and transportation management to public health and social
studies. Although this forecasting problem can be formulated as a multivariate
time-series forecasting task, the current approaches cannot fully exploit the
ever-changing multi-context correlations among POIs. Therefore, we propose
Busyness Graph Neural Network (BysGNN), a temporal graph neural network
designed to learn and uncover the underlying multi-context correlations between
POIs for accurate visit forecasting. Unlike other approaches where only
time-series data is used to learn a dynamic graph, BysGNN utilizes all
contextual information and time-series data to learn an accurate dynamic graph
representation. By incorporating all contextual, temporal, and spatial signals,
we observe a significant improvement in our forecasting accuracy over
state-of-the-art forecasting models in our experiments with real-world datasets
across the United States.
|
[
{
"created": "Wed, 28 Jun 2023 05:14:03 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Sep 2023 02:02:28 GMT",
"version": "v2"
}
] |
2023-10-02
|
[
[
"Hajisafi",
"Arash",
""
],
[
"Lin",
"Haowen",
""
],
[
"Shaham",
"Sina",
""
],
[
"Hu",
"Haoji",
""
],
[
"Siampou",
"Maria Despoina",
""
],
[
"Chiang",
"Yao-Yi",
""
],
[
"Shahabi",
"Cyrus",
""
]
] |
Forecasting the number of visits to Points-of-Interest (POI) in an urban area is critical for planning and decision-making for various application domains, from urban planning and transportation management to public health and social studies. Although this forecasting problem can be formulated as a multivariate time-series forecasting task, the current approaches cannot fully exploit the ever-changing multi-context correlations among POIs. Therefore, we propose Busyness Graph Neural Network (BysGNN), a temporal graph neural network designed to learn and uncover the underlying multi-context correlations between POIs for accurate visit forecasting. Unlike other approaches where only time-series data is used to learn a dynamic graph, BysGNN utilizes all contextual information and time-series data to learn an accurate dynamic graph representation. By incorporating all contextual, temporal, and spatial signals, we observe a significant improvement in our forecasting accuracy over state-of-the-art forecasting models in our experiments with real-world datasets across the United States.
|
2312.03692
|
Ali Naseh
|
Ali Naseh, Jaechul Roh, Amir Houmansadr
|
Memory Triggers: Unveiling Memorization in Text-To-Image Generative
Models through Word-Level Duplication
| null | null | null | null |
cs.CR cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion-based models, such as the Stable Diffusion model, have
revolutionized text-to-image synthesis with their ability to produce
high-quality, high-resolution images. These advancements have prompted
significant progress in image generation and editing tasks. However, these
models also raise concerns due to their tendency to memorize and potentially
replicate exact training samples, posing privacy risks and enabling adversarial
attacks. Duplication in training datasets is recognized as a major factor
contributing to memorization, and various forms of memorization have been
studied so far. This paper focuses on two distinct and underexplored types of
duplication that lead to replication during inference in diffusion-based
models, particularly in the Stable Diffusion model. We delve into these
lesser-studied duplication phenomena and their implications through two case
studies, aiming to contribute to the safer and more responsible use of
generative models in various applications.
|
[
{
"created": "Wed, 6 Dec 2023 18:54:44 GMT",
"version": "v1"
}
] |
2023-12-07
|
[
[
"Naseh",
"Ali",
""
],
[
"Roh",
"Jaechul",
""
],
[
"Houmansadr",
"Amir",
""
]
] |
Diffusion-based models, such as the Stable Diffusion model, have revolutionized text-to-image synthesis with their ability to produce high-quality, high-resolution images. These advancements have prompted significant progress in image generation and editing tasks. However, these models also raise concerns due to their tendency to memorize and potentially replicate exact training samples, posing privacy risks and enabling adversarial attacks. Duplication in training datasets is recognized as a major factor contributing to memorization, and various forms of memorization have been studied so far. This paper focuses on two distinct and underexplored types of duplication that lead to replication during inference in diffusion-based models, particularly the Stable Diffusion model. We delve into these lesser-studied duplication phenomena and their implications through two case studies, aiming to contribute to the safer and more responsible use of generative models in various applications.
|
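As a point of reference for the dataset-duplication factor this abstract discusses, the usual first pass for flagging duplicated training samples is exact-match grouping after normalization. This toy sketch is mine, and it is deliberately coarser than the word-level duplication the paper studies:

```python
import hashlib
from collections import defaultdict

def find_caption_duplicates(captions):
    """Group caption indices by a normalized exact-match hash."""
    buckets = defaultdict(list)
    for i, text in enumerate(captions):
        key = hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()
        buckets[key].append(i)
    # Keep only groups with more than one member, i.e. actual duplicates.
    return [idxs for idxs in buckets.values() if len(idxs) > 1]

captions = ["A red car", "a red car ", "A blue bird"]
print(find_caption_duplicates(captions))  # -> [[0, 1]]
```

Word-level duplication, the paper's focus, would instead look for shared tokens or phrases across otherwise distinct captions, which exact hashing cannot catch.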
2012.13620
|
Sagar Gubbi
|
Sagar Gubbi Venkatesh and Raviteja Upadrashta and Shishir Kolathaya
and Bharadwaj Amrutur
|
Teaching Robots Novel Objects by Pointing at Them
| null | null |
10.1109/RO-MAN47096.2020.9223596
| null |
cs.RO cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robots that must operate in novel environments and collaborate with humans
must be capable of acquiring new knowledge from human experts during operation.
We propose teaching a robot novel objects it has not encountered before by
pointing a hand at the new object of interest. An end-to-end neural network is
used to attend to the novel object of interest indicated by the pointing hand
and then to localize the object in new scenes. In order to attend to the novel
object indicated by the pointing hand, we propose a spatial attention
modulation mechanism that learns to focus on the highlighted object while
ignoring the other objects in the scene. We show that a robot arm can
manipulate novel objects that are highlighted by pointing a hand at them. We
also evaluate the performance of the proposed architecture on a synthetic
dataset constructed using emojis and on a real-world dataset of common objects.
|
[
{
"created": "Fri, 25 Dec 2020 20:01:25 GMT",
"version": "v1"
}
] |
2020-12-29
|
[
[
"Venkatesh",
"Sagar Gubbi",
""
],
[
"Upadrashta",
"Raviteja",
""
],
[
"Kolathaya",
"Shishir",
""
],
[
"Amrutur",
"Bharadwaj",
""
]
] |
Robots that must operate in novel environments and collaborate with humans must be capable of acquiring new knowledge from human experts during operation. We propose teaching a robot novel objects it has not encountered before by pointing a hand at the new object of interest. An end-to-end neural network is used to attend to the novel object of interest indicated by the pointing hand and then to localize the object in new scenes. In order to attend to the novel object indicated by the pointing hand, we propose a spatial attention modulation mechanism that learns to focus on the highlighted object while ignoring the other objects in the scene. We show that a robot arm can manipulate novel objects that are highlighted by pointing a hand at them. We also evaluate the performance of the proposed architecture on a synthetic dataset constructed using emojis and on a real-world dataset of common objects.
|
cs/0412118
|
Chiranjeeb Buragohain
|
Chiranjeeb Buragohain, Divyakant Agrawal, Subhash Suri
|
Power Aware Routing for Sensor Databases
| null |
Proceedings of IEEE INFOCOM 2005, March 13-17, 2005 Miami
|
10.1109/INFCOM.2005.1498455
| null |
cs.NI cs.DC
| null |
Wireless sensor networks offer the potential to span and monitor large
geographical areas inexpensively. Sensor network databases like TinyDB are the
dominant architectures to extract and manage data in such networks. Since
sensors have significant power constraints (battery life) and high
communication costs, the design of energy-efficient communication algorithms is
of great importance. The data flow in a sensor database is very different from
data flow in an ordinary network and poses novel challenges in designing
efficient routing algorithms. In this work we explore the problem of
energy-efficient routing for various types of database queries and show that,
in general, this problem is NP-complete. We give a constant-factor
approximation algorithm for one class of query, and for other queries give
heuristic algorithms. We evaluate the efficiency of the proposed algorithms by
simulation and demonstrate their near optimal performance for various network
sizes.
|
[
{
"created": "Thu, 30 Dec 2004 02:02:35 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Buragohain",
"Chiranjeeb",
""
],
[
"Agrawal",
"Divyakant",
""
],
[
"Suri",
"Subhash",
""
]
] |
Wireless sensor networks offer the potential to span and monitor large geographical areas inexpensively. Sensor network databases like TinyDB are the dominant architectures to extract and manage data in such networks. Since sensors have significant power constraints (battery life) and high communication costs, the design of energy-efficient communication algorithms is of great importance. The data flow in a sensor database is very different from data flow in an ordinary network and poses novel challenges in designing efficient routing algorithms. In this work we explore the problem of energy-efficient routing for various types of database queries and show that, in general, this problem is NP-complete. We give a constant-factor approximation algorithm for one class of query, and for other queries give heuristic algorithms. We evaluate the efficiency of the proposed algorithms by simulation and demonstrate their near optimal performance for various network sizes.
|
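For orientation, the simplest energy-aware baseline for point-to-point queries is a shortest-path computation over per-link transmission energies. The sketch below is that baseline with a made-up graph and energies, not the paper's constant-factor approximation for aggregate queries:

```python
import heapq

def min_energy_path(graph, src, dst):
    """Dijkstra over per-link transmission energies.

    graph: {node: [(neighbor, energy_cost), ...]}
    Returns (path, total_energy); raises KeyError if dst is unreachable.
    """
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, energy in graph.get(u, []):
            nd = d + energy
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]

# Toy 4-node sensor network; link energies are illustrative.
g = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 1.0), ("t", 5.0)], "b": [("t", 1.0)]}
print(min_energy_path(g, "s", "t"))  # -> (['s', 'a', 'b', 't'], 3.0)
```

The harder cases the paper addresses, such as aggregate queries, need routing trees rather than single paths, which is where the NP-completeness arises.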
1102.3390
|
Mayur Punekar
|
Mayur Punekar and Mark F. Flanagan
|
Trellis-Based Check Node Processing for Low-Complexity Nonbinary LP
Decoding
|
Submitted to 2011 IEEE International Symposium on Information Theory
(ISIT 2011)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear Programming (LP) decoding is emerging as an attractive alternative to
decode Low-Density Parity-Check (LDPC) codes. However, the earliest LP decoders
proposed for binary and nonbinary LDPC codes are not suitable for use at
moderate and large code lengths. To overcome this problem, Vontobel et al.
developed an iterative Low-Complexity LP (LCLP) decoding algorithm for binary
LDPC codes. The variable and check node calculations of the binary LCLP decoding
algorithm are related to those of binary Belief Propagation (BP). The present
authors generalized this work to derive an iterative LCLP decoding algorithm
for nonbinary linear codes. Contrary to binary LCLP, the variable and check
node calculations of this algorithm are in general different from that of
nonbinary BP. The overall complexity of nonbinary LCLP decoding is linear in
block length; however, the complexity of its check node calculations is
exponential in the check node degree. In this paper, we propose a modified BCJR
algorithm for efficient check node processing in the nonbinary LCLP decoding
algorithm. The proposed algorithm has complexity linear in the check node
degree. We also introduce an alternative state metric to improve the run time
of the proposed algorithm. Simulation results are presented for $(504, 252)$
and $(1008, 504)$ nonbinary LDPC codes over $\mathbb{Z}_4$.
|
[
{
"created": "Wed, 16 Feb 2011 18:13:38 GMT",
"version": "v1"
}
] |
2011-02-17
|
[
[
"Punekar",
"Mayur",
""
],
[
"Flanagan",
"Mark F.",
""
]
] |
Linear Programming (LP) decoding is emerging as an attractive alternative to decode Low-Density Parity-Check (LDPC) codes. However, the earliest LP decoders proposed for binary and nonbinary LDPC codes are not suitable for use at moderate and large code lengths. To overcome this problem, Vontobel et al. developed an iterative Low-Complexity LP (LCLP) decoding algorithm for binary LDPC codes. The variable and check node calculations of the binary LCLP decoding algorithm are related to those of binary Belief Propagation (BP). The present authors generalized this work to derive an iterative LCLP decoding algorithm for nonbinary linear codes. Contrary to binary LCLP, the variable and check node calculations of this algorithm are in general different from that of nonbinary BP. The overall complexity of nonbinary LCLP decoding is linear in block length; however, the complexity of its check node calculations is exponential in the check node degree. In this paper, we propose a modified BCJR algorithm for efficient check node processing in the nonbinary LCLP decoding algorithm. The proposed algorithm has complexity linear in the check node degree. We also introduce an alternative state metric to improve the run time of the proposed algorithm. Simulation results are presented for $(504, 252)$ and $(1008, 504)$ nonbinary LDPC codes over $\mathbb{Z}_4$.
|
2008.00363
|
Satyananda Kashyap
|
Satyananda Kashyap, Alexandros Karargyris, Joy Wu, Yaniv Gur, Arjun
Sharma, Ken C. L. Wong, Mehdi Moradi, Tanveer Syeda-Mahmood
|
Looking in the Right place for Anomalies: Explainable AI through
Automatic Location Learning
|
5 pages, Paper presented as a poster at the International Symposium
on Biomedical Imaging, 2020, Paper Number 655
|
2020 IEEE 17th International Symposium on Biomedical Imaging
(ISBI)
|
10.1109/ISBI45749.2020.9098370
| null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has now become the de facto approach to the recognition of
anomalies in medical imaging. However, the 'black box' way these models classify
medical images into anomaly labels poses problems for their acceptance,
particularly with clinicians. Current explainable AI methods offer justifications through
visualizations such as heat maps but cannot guarantee that the network is
focusing on the relevant image region fully containing the anomaly. In this
paper, we develop an approach to explainable AI in which the anomaly is assured
to be overlapping the expected location when present. This is made possible by
automatically extracting location-specific labels from textual reports and
learning the association of expected locations to labels using a hybrid
combination of Bi-Directional Long Short-Term Memory Recurrent Neural Networks
(Bi-LSTM) and DenseNet-121. Use of this expected location to bias the
subsequent attention-guided inference network based on ResNet101 results in the
isolation of the anomaly at the expected location when present. The method is
evaluated on a large chest X-ray dataset.
|
[
{
"created": "Sun, 2 Aug 2020 00:02:37 GMT",
"version": "v1"
}
] |
2020-08-04
|
[
[
"Kashyap",
"Satyananda",
""
],
[
"Karargyris",
"Alexandros",
""
],
[
"Wu",
"Joy",
""
],
[
"Gur",
"Yaniv",
""
],
[
"Sharma",
"Arjun",
""
],
[
"Wong",
"Ken C. L.",
""
],
[
"Moradi",
"Mehdi",
""
],
[
"Syeda-Mahmood",
"Tanveer",
""
]
] |
Deep learning has now become the de facto approach to the recognition of anomalies in medical imaging. However, the 'black box' way these models classify medical images into anomaly labels poses problems for their acceptance, particularly with clinicians. Current explainable AI methods offer justifications through visualizations such as heat maps but cannot guarantee that the network is focusing on the relevant image region fully containing the anomaly. In this paper, we develop an approach to explainable AI in which the anomaly is assured to be overlapping the expected location when present. This is made possible by automatically extracting location-specific labels from textual reports and learning the association of expected locations to labels using a hybrid combination of Bi-Directional Long Short-Term Memory Recurrent Neural Networks (Bi-LSTM) and DenseNet-121. Use of this expected location to bias the subsequent attention-guided inference network based on ResNet101 results in the isolation of the anomaly at the expected location when present. The method is evaluated on a large chest X-ray dataset.
|
2001.09273
|
Wei Zhang
|
Quan Yu, Jing Ren, Jiyan Zhang, Siyang Liu, Yinjin Fu, Ying Li, Linru
Ma, Jian Jing, and Wei Zhang
|
An Immunology-Inspired Network Security Architecture
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The coming 5G networks have been enabling the creation of a wide variety of
new services and applications which demand a new network security architecture.
Immunology is the study of the immune system in vertebrates (including humans)
which protects us from infection through various lines of defence. By studying
the resemblance between the immune system and the network security system, we
draw inspiration from immunology and distill guidelines for the design of a
network security architecture. We present a philosophical design principle,
namely maintaining the balance between security and availability. Then, we
derive two methodological principles: 1) achieving situation-awareness and fast
response through community cooperation among heterogeneous nodes, and 2)
enhancing defense capability by consistently contesting with invaders in a real
environment and actively mutating/evolving attack strategies. We also present a
reference architecture designed according to these principles.
|
[
{
"created": "Sat, 25 Jan 2020 07:13:24 GMT",
"version": "v1"
}
] |
2020-01-28
|
[
[
"Yu",
"Quan",
""
],
[
"Ren",
"Jing",
""
],
[
"Zhang",
"Jiyan",
""
],
[
"Liu",
"Siyang",
""
],
[
"Fu",
"Yinjin",
""
],
[
"Li",
"Ying",
""
],
[
"Ma",
"Linru",
""
],
[
"Jing",
"Jian",
""
],
[
"Zhang",
"Wei",
""
]
] |
The coming 5G networks have been enabling the creation of a wide variety of new services and applications which demand a new network security architecture. Immunology is the study of the immune system in vertebrates (including humans) which protects us from infection through various lines of defence. By studying the resemblance between the immune system and the network security system, we draw inspiration from immunology and distill guidelines for the design of a network security architecture. We present a philosophical design principle, namely maintaining the balance between security and availability. Then, we derive two methodological principles: 1) achieving situation-awareness and fast response through community cooperation among heterogeneous nodes, and 2) enhancing defense capability by consistently contesting with invaders in a real environment and actively mutating/evolving attack strategies. We also present a reference architecture designed according to these principles.
|
2402.05375
|
Senmao Li
|
Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin
Hou, Yaxing Wang, Jian Yang
|
Get What You Want, Not What You Don't: Image Content Suppression for
Text-to-Image Diffusion Models
|
ICLR 2024. Our code is available in
https://github.com/sen-mao/SuppressEOT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The success of recent text-to-image diffusion models is largely due to their
capacity to be guided by a complex text prompt, which enables users to
precisely describe the desired content. However, these models struggle to
effectively suppress the generation of undesired content, which is explicitly
requested to be omitted from the generated image in the prompt. In this paper,
we analyze how to manipulate the text embeddings and remove unwanted content
from them. We introduce two contributions, which we refer to as
$\textit{soft-weighted regularization}$ and $\textit{inference-time text
embedding optimization}$. The first regularizes the text embedding matrix and
effectively suppresses the undesired content. The second method aims to further
suppress the unwanted content generation of the prompt, and encourages the
generation of desired content. We evaluate our method quantitatively and
qualitatively in extensive experiments, validating its effectiveness.
Furthermore, our method generalizes to both the pixel-space diffusion
models (i.e. DeepFloyd-IF) and the latent-space diffusion models (i.e. Stable
Diffusion).
|
[
{
"created": "Thu, 8 Feb 2024 03:15:06 GMT",
"version": "v1"
}
] |
2024-02-09
|
[
[
"Li",
"Senmao",
""
],
[
"van de Weijer",
"Joost",
""
],
[
"Hu",
"Taihang",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Hou",
"Qibin",
""
],
[
"Wang",
"Yaxing",
""
],
[
"Yang",
"Jian",
""
]
] |
The success of recent text-to-image diffusion models is largely due to their capacity to be guided by a complex text prompt, which enables users to precisely describe the desired content. However, these models struggle to effectively suppress the generation of undesired content, which is explicitly requested to be omitted from the generated image in the prompt. In this paper, we analyze how to manipulate the text embeddings and remove unwanted content from them. We introduce two contributions, which we refer to as $\textit{soft-weighted regularization}$ and $\textit{inference-time text embedding optimization}$. The first regularizes the text embedding matrix and effectively suppresses the undesired content. The second method aims to further suppress the unwanted content generation of the prompt, and encourages the generation of desired content. We evaluate our method quantitatively and qualitatively in extensive experiments, validating its effectiveness. Furthermore, our method generalizes to both the pixel-space diffusion models (i.e. DeepFloyd-IF) and the latent-space diffusion models (i.e. Stable Diffusion).
|
2103.05875
|
Michael Stengel
|
Michael Stengel, Zander Majercik, Benjamin Boudaoud, Morgan McGuire
|
A Distributed, Decoupled System for Losslessly Streaming Dynamic Light
Probes to Thin Clients
|
12 pages, 7 figures, 3 tables
| null | null | null |
cs.DC cs.GR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a networked, high performance graphics system that combines
dynamic, high quality, ray traced global illumination computed on a server with
direct illumination and primary visibility computed on a client. This approach
provides many of the image quality benefits of real-time ray tracing on
low-power and legacy hardware, while maintaining a low latency response and
mobile form factor. Our system distributes the graphics pipeline over a network
by computing diffuse global illumination on a remote machine. Global
illumination is computed using a recent irradiance volume representation
combined with a novel, lossless, HEVC-based, hardware-accelerated encoding, and
a perceptually-motivated update scheme. Our experimental implementation streams
thousands of irradiance probes per second and requires less than 50 Mbps of
throughput, reducing the consumed bandwidth by 99.4% when streaming at 60 Hz
compared to traditional lossless texture compression. This bandwidth reduction
allows higher quality and lower latency graphics than state-of-the-art remote
rendering via video streaming. In addition, our split-rendering solution
decouples remote computation from local rendering and so does not limit local
display update rate or resolution.
|
[
{
"created": "Wed, 10 Mar 2021 05:21:03 GMT",
"version": "v1"
}
] |
2021-03-11
|
[
[
"Stengel",
"Michael",
""
],
[
"Majercik",
"Zander",
""
],
[
"Boudaoud",
"Benjamin",
""
],
[
"McGuire",
"Morgan",
""
]
] |
We present a networked, high performance graphics system that combines dynamic, high quality, ray traced global illumination computed on a server with direct illumination and primary visibility computed on a client. This approach provides many of the image quality benefits of real-time ray tracing on low-power and legacy hardware, while maintaining a low latency response and mobile form factor. Our system distributes the graphics pipeline over a network by computing diffuse global illumination on a remote machine. Global illumination is computed using a recent irradiance volume representation combined with a novel, lossless, HEVC-based, hardware-accelerated encoding, and a perceptually-motivated update scheme. Our experimental implementation streams thousands of irradiance probes per second and requires less than 50 Mbps of throughput, reducing the consumed bandwidth by 99.4% when streaming at 60 Hz compared to traditional lossless texture compression. This bandwidth reduction allows higher quality and lower latency graphics than state-of-the-art remote rendering via video streaming. In addition, our split-rendering solution decouples remote computation from local rendering and so does not limit local display update rate or resolution.
|
2106.15195
|
Benjamin Marie
|
Benjamin Marie, Atsushi Fujita, Raphael Rubino
|
Scientific Credibility of Machine Translation Research: A
Meta-Evaluation of 769 Papers
|
Camera-ready for ACL2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the first large-scale meta-evaluation of machine
translation (MT). We annotated MT evaluations conducted in 769 research papers
published from 2010 to 2020. Our study shows that practices for automatic MT
evaluation have dramatically changed during the past decade and follow
concerning trends. An increasing number of MT evaluations exclusively rely on
differences between BLEU scores to draw conclusions, without performing any
kind of statistical significance testing or human evaluation, while at least
108 metrics claiming to be better than BLEU have been proposed. MT evaluations
in recent papers tend to copy and compare automatic metric scores from previous
work to claim the superiority of a method or an algorithm without confirming
that exactly the same training, validation, and test data were used or that
the metric scores are comparable. Furthermore, tools for reporting
standardized metric scores are still far from being widely adopted by the MT
community. After showing how the accumulation of these pitfalls leads to
dubious evaluation, we propose a guideline to encourage better automatic MT
evaluation along with a simple meta-evaluation scoring method to assess its
credibility.
|
[
{
"created": "Tue, 29 Jun 2021 09:30:17 GMT",
"version": "v1"
}
] |
2021-06-30
|
[
[
"Marie",
"Benjamin",
""
],
[
"Fujita",
"Atsushi",
""
],
[
"Rubino",
"Raphael",
""
]
] |
This paper presents the first large-scale meta-evaluation of machine translation (MT). We annotated MT evaluations conducted in 769 research papers published from 2010 to 2020. Our study shows that practices for automatic MT evaluation have dramatically changed during the past decade and follow concerning trends. An increasing number of MT evaluations exclusively rely on differences between BLEU scores to draw conclusions, without performing any kind of statistical significance testing or human evaluation, while at least 108 metrics claiming to be better than BLEU have been proposed. MT evaluations in recent papers tend to copy and compare automatic metric scores from previous work to claim the superiority of a method or an algorithm without confirming that exactly the same training, validation, and test data were used or that the metric scores are comparable. Furthermore, tools for reporting standardized metric scores are still far from being widely adopted by the MT community. After showing how the accumulation of these pitfalls leads to dubious evaluation, we propose a guideline to encourage better automatic MT evaluation along with a simple meta-evaluation scoring method to assess its credibility.
|
1804.08338
|
Duyu Tang
|
Yibo Sun, Duyu Tang, Nan Duan, Jianshu Ji, Guihong Cao, Xiaocheng
Feng, Bing Qin, Ting Liu, Ming Zhou
|
Semantic Parsing with Syntax- and Table-Aware SQL Generation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a generative model to map natural language questions into SQL
queries. Existing neural network based approaches typically generate a SQL
query word by word; however, a large portion of the generated results are
incorrect or not executable due to the mismatch between question words and
table contents. Our approach addresses this problem by considering the
structure of the table and the syntax of the SQL language. The quality of the generated
SQL query is significantly improved through (1) learning to replicate content
from column names, cells or SQL keywords; and (2) improving the generation of
WHERE clause by leveraging the column-cell relation. Experiments are conducted
on WikiSQL, a recently released dataset with the largest number of question-SQL pairs.
Our approach significantly improves the state-of-the-art execution accuracy
from 69.0% to 74.4%.
|
[
{
"created": "Mon, 23 Apr 2018 11:18:47 GMT",
"version": "v1"
}
] |
2018-04-24
|
[
[
"Sun",
"Yibo",
""
],
[
"Tang",
"Duyu",
""
],
[
"Duan",
"Nan",
""
],
[
"Ji",
"Jianshu",
""
],
[
"Cao",
"Guihong",
""
],
[
"Feng",
"Xiaocheng",
""
],
[
"Qin",
"Bing",
""
],
[
"Liu",
"Ting",
""
],
[
"Zhou",
"Ming",
""
]
] |
We present a generative model to map natural language questions into SQL queries. Existing neural network based approaches typically generate a SQL query word by word; however, a large portion of the generated results are incorrect or not executable due to the mismatch between question words and table contents. Our approach addresses this problem by considering the structure of the table and the syntax of the SQL language. The quality of the generated SQL query is significantly improved through (1) learning to replicate content from column names, cells or SQL keywords; and (2) improving the generation of WHERE clause by leveraging the column-cell relation. Experiments are conducted on WikiSQL, a recently released dataset with the largest number of question-SQL pairs. Our approach significantly improves the state-of-the-art execution accuracy from 69.0% to 74.4%.
|
1212.4777
|
Ankur Moitra
|
Sanjeev Arora, Rong Ge, Yoni Halpern, David Mimno, Ankur Moitra, David
Sontag, Yichen Wu, Michael Zhu
|
A Practical Algorithm for Topic Modeling with Provable Guarantees
|
26 pages
| null | null | null |
cs.LG cs.DS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Topic models provide a useful method for dimensionality reduction and
exploratory data analysis in large text corpora. Most approaches to topic model
inference have been based on a maximum likelihood objective. Efficient
algorithms exist that approximate this objective, but they have no provable
guarantees. Recently, algorithms have been introduced that provide provable
bounds, but these algorithms are not practical because they are inefficient and
not robust to violations of model assumptions. In this paper we present an
algorithm for topic model inference that is both provable and practical. The
algorithm produces results comparable to the best MCMC implementations while
running orders of magnitude faster.
|
[
{
"created": "Wed, 19 Dec 2012 18:14:51 GMT",
"version": "v1"
}
] |
2012-12-20
|
[
[
"Arora",
"Sanjeev",
""
],
[
"Ge",
"Rong",
""
],
[
"Halpern",
"Yoni",
""
],
[
"Mimno",
"David",
""
],
[
"Moitra",
"Ankur",
""
],
[
"Sontag",
"David",
""
],
[
"Wu",
"Yichen",
""
],
[
"Zhu",
"Michael",
""
]
] |
Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.
|
2303.15735
|
Jianping Zhang
|
Jianping Zhang, Jen-tse Huang, Wenxuan Wang, Yichen Li, Weibin Wu,
Xiaosen Wang, Yuxin Su, Michael R. Lyu
|
Improving the Transferability of Adversarial Samples by Path-Augmented
Method
|
10 pages + appendix, CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks have achieved unprecedented success on diverse vision
tasks. However, they are vulnerable to adversarial noise that is imperceptible
to humans. This phenomenon negatively affects their deployment in real-world
scenarios, especially security-related ones. To evaluate the robustness of a
target model in practice, transfer-based attacks craft adversarial samples with
a local model and have attracted increasing attention from researchers due to
their high efficiency. The state-of-the-art transfer-based attacks are
generally based on data augmentation, which typically augments multiple
training images from a linear path when learning adversarial samples. However,
such methods select the image augmentation path heuristically and may augment
images that are semantics-inconsistent with the target images, which harms the
transferability of the generated adversarial samples. To overcome the pitfall,
we propose the Path-Augmented Method (PAM). Specifically, PAM first constructs
a candidate augmentation path pool. It then settles the employed augmentation
paths during adversarial sample generation with greedy search. Furthermore, to
avoid augmenting semantics-inconsistent images, we train a Semantics Predictor
(SP) to constrain the length of the augmentation path. Extensive experiments
confirm that PAM can achieve an improvement of over 4.8% on average compared
with the state-of-the-art baselines in terms of the attack success rates.
|
[
{
"created": "Tue, 28 Mar 2023 05:14:04 GMT",
"version": "v1"
}
] |
2023-03-29
|
[
[
"Zhang",
"Jianping",
""
],
[
"Huang",
"Jen-tse",
""
],
[
"Wang",
"Wenxuan",
""
],
[
"Li",
"Yichen",
""
],
[
"Wu",
"Weibin",
""
],
[
"Wang",
"Xiaosen",
""
],
[
"Su",
"Yuxin",
""
],
[
"Lyu",
"Michael R.",
""
]
] |
Deep neural networks have achieved unprecedented success on diverse vision tasks. However, they are vulnerable to adversarial noise that is imperceptible to humans. This phenomenon negatively affects their deployment in real-world scenarios, especially security-related ones. To evaluate the robustness of a target model in practice, transfer-based attacks craft adversarial samples with a local model and have attracted increasing attention from researchers due to their high efficiency. The state-of-the-art transfer-based attacks are generally based on data augmentation, which typically augments multiple training images from a linear path when learning adversarial samples. However, such methods select the image augmentation path heuristically and may augment images that are semantics-inconsistent with the target images, which harms the transferability of the generated adversarial samples. To overcome the pitfall, we propose the Path-Augmented Method (PAM). Specifically, PAM first constructs a candidate augmentation path pool. It then settles the employed augmentation paths during adversarial sample generation with greedy search. Furthermore, to avoid augmenting semantics-inconsistent images, we train a Semantics Predictor (SP) to constrain the length of the augmentation path. Extensive experiments confirm that PAM can achieve an improvement of over 4.8% on average compared with the state-of-the-art baselines in terms of the attack success rates.
|
1206.1355
|
Xiaowen Gong
|
Xiaowen Gong, Junshan Zhang, Douglas Cochran
|
A Coverage Theory of Bistatic Radar Networks: Worst-Case Intrusion Path
and Optimal Deployment
|
12 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study optimal radar deployment for intrusion detection,
with focus on network coverage. In contrast to the disk-based sensing model in
a traditional sensor network, the detection range of a bistatic radar depends
on the locations of both the radar transmitter and radar receiver, and is
characterized by Cassini ovals. Furthermore, in a network with multiple radar
transmitters and receivers, since any pair of transmitter and receiver can
potentially form a bistatic radar, the detection ranges of different bistatic
radars are coupled and the corresponding network coverage is intimately related
to the locations of all transmitters and receivers, making the optimal
deployment design highly non-trivial. Clearly, the detectability of an intruder
depends on the highest SNR received by all possible bistatic radars. We focus
on the worst-case intrusion detectability, i.e., the minimum possible
detectability along all possible intrusion paths. Although it is plausible to
deploy radars on a shortest line segment across the field, it is not always
optimal in general, which we illustrate via counter-examples. We then present a
sufficient condition on the field geometry for the optimality of shortest line
deployment to hold. Further, we quantify the local structure of detectability
corresponding to a given deployment order and spacings of radar transmitters
and receivers, building on which we characterize the optimal deployment to
maximize the worst-case intrusion detectability. Our results show that the
optimal deployment locations exhibit a balanced structure. We also develop a
polynomial-time approximation algorithm for characterizing the worst-case
intrusion path for any given locations of radars under random deployment.
|
[
{
"created": "Wed, 6 Jun 2012 21:33:06 GMT",
"version": "v1"
}
] |
2012-06-08
|
[
[
"Gong",
"Xiaowen",
""
],
[
"Zhang",
"Junshan",
""
],
[
"Cochran",
"Douglas",
""
]
] |
In this paper, we study optimal radar deployment for intrusion detection, with focus on network coverage. In contrast to the disk-based sensing model in a traditional sensor network, the detection range of a bistatic radar depends on the locations of both the radar transmitter and radar receiver, and is characterized by Cassini ovals. Furthermore, in a network with multiple radar transmitters and receivers, since any pair of transmitter and receiver can potentially form a bistatic radar, the detection ranges of different bistatic radars are coupled and the corresponding network coverage is intimately related to the locations of all transmitters and receivers, making the optimal deployment design highly non-trivial. Clearly, the detectability of an intruder depends on the highest SNR received by all possible bistatic radars. We focus on the worst-case intrusion detectability, i.e., the minimum possible detectability along all possible intrusion paths. Although it is plausible to deploy radars on a shortest line segment across the field, it is not always optimal in general, which we illustrate via counter-examples. We then present a sufficient condition on the field geometry for the optimality of shortest line deployment to hold. Further, we quantify the local structure of detectability corresponding to a given deployment order and spacings of radar transmitters and receivers, building on which we characterize the optimal deployment to maximize the worst-case intrusion detectability. Our results show that the optimal deployment locations exhibit a balanced structure. We also develop a polynomial-time approximation algorithm for characterizing the worst-case intrusion path for any given locations of radars under random deployment.
|
1707.07278
|
Besnik Fetahu
|
Besnik Fetahu and Katja Markert and Avishek Anand
|
Fine Grained Citation Span for References in Wikipedia
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
\emph{Verifiability} is one of the core editing principles in Wikipedia,
editors being encouraged to provide citations for the added content. For a
Wikipedia article, determining the \emph{citation span} of a citation, i.e.
what content is covered by a citation, is important as it helps decide for
which content citations are still missing.
We are the first to address the problem of determining the \emph{citation
span} in Wikipedia articles. We approach this problem by classifying which
textual fragments in an article are covered by a citation. We propose a
sequence classification approach where for a paragraph and a citation, we
determine the citation span at a fine-grained level.
We provide a thorough experimental evaluation and compare our approach
against baselines adopted from the scientific domain, where we show improvement
for all evaluation metrics.
|
[
{
"created": "Sun, 23 Jul 2017 10:43:26 GMT",
"version": "v1"
}
] |
2017-07-25
|
[
[
"Fetahu",
"Besnik",
""
],
[
"Markert",
"Katja",
""
],
[
"Anand",
"Avishek",
""
]
] |
\emph{Verifiability} is one of the core editing principles in Wikipedia, editors being encouraged to provide citations for the added content. For a Wikipedia article, determining the \emph{citation span} of a citation, i.e. what content is covered by a citation, is important as it helps decide for which content citations are still missing. We are the first to address the problem of determining the \emph{citation span} in Wikipedia articles. We approach this problem by classifying which textual fragments in an article are covered by a citation. We propose a sequence classification approach where for a paragraph and a citation, we determine the citation span at a fine-grained level. We provide a thorough experimental evaluation and compare our approach against baselines adopted from the scientific domain, where we show improvement for all evaluation metrics.
|
1907.05473
|
Nikhil Bansal
|
Nikhil Bansal and Jatin Batra
|
Non-uniform Geometric Set Cover and Scheduling on Multiple Machines
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the following general scheduling problem studied recently by
Moseley. There are $n$ jobs, all released at time $0$, where job $j$ has size
$p_j$ and an associated arbitrary non-decreasing cost function $f_j$ of its
completion time. The goal is to find a schedule on $m$ machines with minimum
total cost. We give an $O(1)$ approximation for the problem, improving upon the
previous $O(\log \log nP)$ bound ($P$ is the maximum to minimum size ratio),
and resolving the open question of Moseley.
We first note that the scheduling problem can be reduced to a clean geometric
set cover problem where points on a line with arbitrary demands must be
covered by a minimum cost collection of given intervals with non-uniform
capacity profiles. Unfortunately, current techniques for such problems based on
knapsack cover inequalities and low union complexity completely lose the
geometric structure in the non-uniform capacity profiles and incur at least an
$\Omega(\log\log P)$ loss.
To this end, we consider general covering problems with non-uniform
capacities, and give a new method to handle capacities in a way that completely
preserves their geometric structure. This allows us to use sophisticated
geometric ideas in a black-box way to avoid the $\Omega(\log \log P)$ loss in
previous approaches. In addition to the scheduling problem above, we use this
approach to obtain $O(1)$ or inverse Ackermann type bounds for several basic
capacitated covering problems.
|
[
{
"created": "Thu, 11 Jul 2019 20:10:16 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jul 2020 18:40:48 GMT",
"version": "v2"
}
] |
2020-07-21
|
[
[
"Bansal",
"Nikhil",
""
],
[
"Batra",
"Jatin",
""
]
] |
We consider the following general scheduling problem studied recently by Moseley. There are $n$ jobs, all released at time $0$, where job $j$ has size $p_j$ and an associated arbitrary non-decreasing cost function $f_j$ of its completion time. The goal is to find a schedule on $m$ machines with minimum total cost. We give an $O(1)$ approximation for the problem, improving upon the previous $O(\log \log nP)$ bound ($P$ is the maximum to minimum size ratio), and resolving the open question of Moseley. We first note that the scheduling problem can be reduced to a clean geometric set cover problem where points on a line with arbitrary demands must be covered by a minimum cost collection of given intervals with non-uniform capacity profiles. Unfortunately, current techniques for such problems based on knapsack cover inequalities and low union complexity completely lose the geometric structure in the non-uniform capacity profiles and incur at least an $\Omega(\log\log P)$ loss. To this end, we consider general covering problems with non-uniform capacities, and give a new method to handle capacities in a way that completely preserves their geometric structure. This allows us to use sophisticated geometric ideas in a black-box way to avoid the $\Omega(\log \log P)$ loss in previous approaches. In addition to the scheduling problem above, we use this approach to obtain $O(1)$ or inverse Ackermann type bounds for several basic capacitated covering problems.
|
2302.01526
|
Shin-Nosuke Ishikawa
|
Shin-nosuke Ishikawa, Masato Todo, Masato Taki, Yasunobu Uchiyama,
Kazunari Matsunaga, Peihsuan Lin, Taiki Ogihara, Masao Yasui
|
Example-Based Explainable AI and its Application for Remote Sensing
Image Classification
|
10 pages, 4 figures, accepted for publication in International
Journal of Applied Earth Observation and Geoinformation
| null | null | null |
cs.AI cs.CV cs.LG physics.geo-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method of explainable artificial intelligence (XAI), "What I
Know (WIK)", to provide additional information to verify the reliability of a
deep learning model by showing an example of an instance in a training dataset
that is similar to the input data to be inferred and demonstrate it in a remote
sensing image classification task. One of the expected roles of XAI methods is
verifying whether inferences of a trained machine learning model are valid for
an application, and which datasets are used to train the model is as
important a factor as the model architecture. Our data-centric approach
can help determine whether the training dataset is sufficient for each
inference by checking the selected example data. If the selected example looks
similar to the input data, we can confirm that the model was not trained on a
dataset with a feature distribution far from the feature of the input data.
With this method, the criteria for selecting an example are not merely data
similarity with the input data but also data similarity in the context of the
model task. Using a remote sensing image dataset from the Sentinel-2 satellite,
the concept was successfully demonstrated with reasonably selected examples.
This method can be applied to various machine-learning tasks, including
classification and regression.
|
[
{
"created": "Fri, 3 Feb 2023 03:48:43 GMT",
"version": "v1"
}
] |
2023-02-06
|
[
[
"Ishikawa",
"Shin-nosuke",
""
],
[
"Todo",
"Masato",
""
],
[
"Taki",
"Masato",
""
],
[
"Uchiyama",
"Yasunobu",
""
],
[
"Matsunaga",
"Kazunari",
""
],
[
"Lin",
"Peihsuan",
""
],
[
"Ogihara",
"Taiki",
""
],
[
"Yasui",
"Masao",
""
]
] |
We present a method of explainable artificial intelligence (XAI), "What I Know (WIK)", to provide additional information to verify the reliability of a deep learning model by showing an example of an instance in a training dataset that is similar to the input data to be inferred and demonstrate it in a remote sensing image classification task. One of the expected roles of XAI methods is verifying whether inferences of a trained machine learning model are valid for an application, and which datasets are used to train the model is as important a factor as the model architecture. Our data-centric approach can help determine whether the training dataset is sufficient for each inference by checking the selected example data. If the selected example looks similar to the input data, we can confirm that the model was not trained on a dataset with a feature distribution far from the feature of the input data. With this method, the criteria for selecting an example are not merely data similarity with the input data but also data similarity in the context of the model task. Using a remote sensing image dataset from the Sentinel-2 satellite, the concept was successfully demonstrated with reasonably selected examples. This method can be applied to various machine-learning tasks, including classification and regression.
|
1512.01030
|
V S R Veeravasarapu
|
V S R Veeravasarapu, Rudra Narayan Hota, Constantin Rothkopf, and
Ramesh Visvanathan
|
Simulations for Validation of Vision Systems
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As computer vision matures into a systems science and engineering
discipline, there is a trend toward leveraging the latest advances in computer graphics
simulations for performance evaluation, learning, and inference. However, there
is an open question on the utility of graphics simulations for vision with
apparently contradicting views in the literature. In this paper, we place the
results from the recent literature in the context of performance
characterization methodology outlined in the 90's and note that insights
derived from simulations can be qualitative or quantitative depending on the
degree of fidelity of models used in simulation and the nature of the question
posed by the experimenter. We describe a simulation platform that incorporates
the latest graphics advances and use it for systematic performance characterization
and trade-off analysis for vision system design. We verify the utility of the
platform in a case study of validating a generative model inspired vision
hypothesis, the Rank-Order consistency model, in the contexts of global and local
illumination changes, bad weather, and high-frequency noise. Our approach
establishes the link between alternative viewpoints, involving models with
physics based semantics and signal and perturbation semantics and confirms
insights in literature on robust change detection.
|
[
{
"created": "Thu, 3 Dec 2015 10:53:32 GMT",
"version": "v1"
}
] |
2015-12-04
|
[
[
"Veeravasarapu",
"V S R",
""
],
[
"Hota",
"Rudra Narayan",
""
],
[
"Rothkopf",
"Constantin",
""
],
[
"Visvanathan",
"Ramesh",
""
]
] |
As computer vision matures into a systems science and engineering discipline, there is a trend toward leveraging the latest advances in computer graphics simulations for performance evaluation, learning, and inference. However, there is an open question on the utility of graphics simulations for vision with apparently contradicting views in the literature. In this paper, we place the results from the recent literature in the context of performance characterization methodology outlined in the 90's and note that insights derived from simulations can be qualitative or quantitative depending on the degree of fidelity of models used in simulation and the nature of the question posed by the experimenter. We describe a simulation platform that incorporates the latest graphics advances and use it for systematic performance characterization and trade-off analysis for vision system design. We verify the utility of the platform in a case study of validating a generative model inspired vision hypothesis, the Rank-Order consistency model, in the contexts of global and local illumination changes, bad weather, and high-frequency noise. Our approach establishes the link between alternative viewpoints, involving models with physics based semantics and signal and perturbation semantics and confirms insights in literature on robust change detection.
|
2210.01240
|
Abulhair Saparov
|
Abulhair Saparov and He He
|
Language Models Are Greedy Reasoners: A Systematic Formal Analysis of
Chain-of-Thought
|
Published as a conference paper at ICLR 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have shown remarkable reasoning capabilities
given chain-of-thought prompts (examples with intermediate reasoning steps).
Existing benchmarks measure reasoning ability indirectly, by evaluating
accuracy on downstream tasks such as mathematical reasoning. However, it is
unclear how these models obtain the answers and whether they rely on simple
heuristics rather than the generated chain-of-thought. To enable systematic
exploration of the reasoning ability of LLMs, we present a new synthetic
question-answering dataset called PrOntoQA, where each example is generated
from a synthetic world model represented in first-order logic. This allows us
to parse the generated chain-of-thought into symbolic proofs for formal
analysis. Our analysis on InstructGPT and GPT-3 shows that LLMs are quite
capable of making correct individual deduction steps, and so are generally
capable of reasoning, even in fictional contexts. However, they have difficulty
with proof planning: When multiple valid deduction steps are available, they
are not able to systematically explore the different options.
|
[
{
"created": "Mon, 3 Oct 2022 21:34:32 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Jan 2023 05:33:23 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Jan 2023 02:18:52 GMT",
"version": "v3"
},
{
"created": "Thu, 2 Mar 2023 03:54:28 GMT",
"version": "v4"
}
] |
2023-03-03
|
[
[
"Saparov",
"Abulhair",
""
],
[
"He",
"He",
""
]
] |
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought prompts (examples with intermediate reasoning steps). Existing benchmarks measure reasoning ability indirectly, by evaluating accuracy on downstream tasks such as mathematical reasoning. However, it is unclear how these models obtain the answers and whether they rely on simple heuristics rather than the generated chain-of-thought. To enable systematic exploration of the reasoning ability of LLMs, we present a new synthetic question-answering dataset called PrOntoQA, where each example is generated from a synthetic world model represented in first-order logic. This allows us to parse the generated chain-of-thought into symbolic proofs for formal analysis. Our analysis on InstructGPT and GPT-3 shows that LLMs are quite capable of making correct individual deduction steps, and so are generally capable of reasoning, even in fictional contexts. However, they have difficulty with proof planning: When multiple valid deduction steps are available, they are not able to systematically explore the different options.
|
1811.09845
|
Shikhar Sharma
|
Alaaeldin El-Nouby, Shikhar Sharma, Hannes Schulz, Devon Hjelm, Layla
El Asri, Samira Ebrahimi Kahou, Yoshua Bengio, Graham W. Taylor
|
Tell, Draw, and Repeat: Generating and Modifying Images Based on
Continual Linguistic Instruction
|
Accepted at ICCV 2019
|
Proceedings of the 2019 IEEE International Conference on Computer
Vision (ICCV)
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conditional text-to-image generation is an active area of research, with many
possible applications. Existing research has primarily focused on generating a
single image from available conditioning information in one step. One practical
extension beyond one-step generation is a system that generates an image
iteratively, conditioned on ongoing linguistic input or feedback. This is
significantly more challenging than one-step generation tasks, as such a system
must understand the contents of its generated images with respect to the
feedback history, the current feedback, as well as the interactions among
concepts present in the feedback history. In this work, we present a recurrent
image generation model which takes into account both the generated output up to
the current step as well as all past instructions for generation. We show that
our model is able to generate the background, add new objects, and apply simple
transformations to existing objects. We believe our approach is an important
step toward interactive generation. Code and data is available at:
https://www.microsoft.com/en-us/research/project/generative-neural-visual-artist-geneva/ .
|
[
{
"created": "Sat, 24 Nov 2018 14:42:18 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Apr 2019 17:34:25 GMT",
"version": "v2"
},
{
"created": "Mon, 23 Sep 2019 15:14:05 GMT",
"version": "v3"
}
] |
2019-09-24
|
[
[
"El-Nouby",
"Alaaeldin",
""
],
[
"Sharma",
"Shikhar",
""
],
[
"Schulz",
"Hannes",
""
],
[
"Hjelm",
"Devon",
""
],
[
"Asri",
"Layla El",
""
],
[
"Kahou",
"Samira Ebrahimi",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Taylor",
"Graham W.",
""
]
] |
Conditional text-to-image generation is an active area of research, with many possible applications. Existing research has primarily focused on generating a single image from available conditioning information in one step. One practical extension beyond one-step generation is a system that generates an image iteratively, conditioned on ongoing linguistic input or feedback. This is significantly more challenging than one-step generation tasks, as such a system must understand the contents of its generated images with respect to the feedback history, the current feedback, as well as the interactions among concepts present in the feedback history. In this work, we present a recurrent image generation model which takes into account both the generated output up to the current step as well as all past instructions for generation. We show that our model is able to generate the background, add new objects, and apply simple transformations to existing objects. We believe our approach is an important step toward interactive generation. Code and data is available at: https://www.microsoft.com/en-us/research/project/generative-neural-visual-artist-geneva/ .
|
2304.05544
|
Vikas Natesh
|
Andrew Sabot, Vikas Natesh, H.T. Kung, Wei-Te Ting
|
MEMA Runtime Framework: Minimizing External Memory Accesses for TinyML
on Microcontrollers
|
Accepted as a full paper by the TinyML Research Symposium 2023
| null | null | null |
cs.LG cs.AR cs.PF cs.PL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present the MEMA framework for the easy and quick derivation of efficient
inference runtimes that minimize external memory accesses for matrix
multiplication on TinyML systems. The framework accounts for hardware resource
constraints and problem sizes in analytically determining optimized schedules
and kernels that minimize memory accesses. MEMA provides a solution to a
well-known problem in the current practice, that is, optimal schedules tend to
be found only through a time consuming and heuristic search of a large
scheduling space. We compare the performance of runtimes derived from MEMA to
existing state-of-the-art libraries on ARM-based TinyML systems. For example,
for neural network benchmarks on the ARM Cortex-M4, we achieve up to a 1.8x
speedup and 44% energy reduction over CMSIS-NN.
|
[
{
"created": "Wed, 12 Apr 2023 00:27:11 GMT",
"version": "v1"
}
] |
2023-04-13
|
[
[
"Sabot",
"Andrew",
""
],
[
"Natesh",
"Vikas",
""
],
[
"Kung",
"H. T.",
""
],
[
"Ting",
"Wei-Te",
""
]
] |
We present the MEMA framework for the easy and quick derivation of efficient inference runtimes that minimize external memory accesses for matrix multiplication on TinyML systems. The framework accounts for hardware resource constraints and problem sizes in analytically determining optimized schedules and kernels that minimize memory accesses. MEMA provides a solution to a well-known problem in the current practice, that is, optimal schedules tend to be found only through a time consuming and heuristic search of a large scheduling space. We compare the performance of runtimes derived from MEMA to existing state-of-the-art libraries on ARM-based TinyML systems. For example, for neural network benchmarks on the ARM Cortex-M4, we achieve up to a 1.8x speedup and 44% energy reduction over CMSIS-NN.
|
1804.09003
|
Zhuoyao Zhong
|
Zhuoyao Zhong, Lei Sun and Qiang Huo
|
An Anchor-Free Region Proposal Network for Faster R-CNN based Text
Detection Approaches
|
Technical report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The anchor mechanism of Faster R-CNN and SSD framework is considered not
effective enough to scene text detection, which can be attributed to its IoU
based matching criterion between anchors and ground-truth boxes. In order to
better enclose scene text instances of various shapes, it requires to design
anchors of various scales, aspect ratios and even orientations manually, which
makes anchor-based methods sophisticated and inefficient. In this paper, we
propose a novel anchor-free region proposal network (AF-RPN) to replace the
original anchor-based RPN in the Faster R-CNN framework to address the above
problem. Compared with a vanilla RPN and FPN-RPN, AF-RPN can get rid of
complicated anchor design and achieve higher recall rate on large-scale
COCO-Text dataset. Owing to the high-quality text proposals, our Faster R-CNN
based two-stage text detection approach achieves state-of-the-art results on
ICDAR-2017 MLT, ICDAR-2015 and ICDAR-2013 text detection benchmarks when using
single-scale and single-model (ResNet50) testing only.
|
[
{
"created": "Tue, 24 Apr 2018 13:08:32 GMT",
"version": "v1"
}
] |
2018-04-25
|
[
[
"Zhong",
"Zhuoyao",
""
],
[
"Sun",
"Lei",
""
],
[
"Huo",
"Qiang",
""
]
] |
The anchor mechanism of Faster R-CNN and SSD framework is considered not effective enough to scene text detection, which can be attributed to its IoU based matching criterion between anchors and ground-truth boxes. In order to better enclose scene text instances of various shapes, it requires to design anchors of various scales, aspect ratios and even orientations manually, which makes anchor-based methods sophisticated and inefficient. In this paper, we propose a novel anchor-free region proposal network (AF-RPN) to replace the original anchor-based RPN in the Faster R-CNN framework to address the above problem. Compared with a vanilla RPN and FPN-RPN, AF-RPN can get rid of complicated anchor design and achieve higher recall rate on large-scale COCO-Text dataset. Owing to the high-quality text proposals, our Faster R-CNN based two-stage text detection approach achieves state-of-the-art results on ICDAR-2017 MLT, ICDAR-2015 and ICDAR-2013 text detection benchmarks when using single-scale and single-model (ResNet50) testing only.
|
2006.04152
|
Canwen Xu
|
Wangchunshu Zhou and Canwen Xu and Tao Ge and Julian McAuley and Ke Xu
and Furu Wei
|
BERT Loses Patience: Fast and Robust Inference with Early Exit
|
NeurIPS 2020
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose Patience-based Early Exit, a straightforward yet
effective inference method that can be used as a plug-and-play technique to
simultaneously improve the efficiency and robustness of a pretrained language
model (PLM). To achieve this, our approach couples an internal-classifier with
each layer of a PLM and dynamically stops inference when the intermediate
predictions of the internal classifiers remain unchanged for a pre-defined
number of steps. Our approach improves inference efficiency as it allows the
model to make a prediction with fewer layers. Meanwhile, experimental results
with an ALBERT model show that our method can improve the accuracy and
robustness of the model by preventing it from overthinking and exploiting
multiple classifiers for prediction, yielding a better accuracy-speed trade-off
compared to existing early exit methods.
|
[
{
"created": "Sun, 7 Jun 2020 13:38:32 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jun 2020 04:46:19 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Oct 2020 06:37:36 GMT",
"version": "v3"
}
] |
2020-10-23
|
[
[
"Zhou",
"Wangchunshu",
""
],
[
"Xu",
"Canwen",
""
],
[
"Ge",
"Tao",
""
],
[
"McAuley",
"Julian",
""
],
[
"Xu",
"Ke",
""
],
[
"Wei",
"Furu",
""
]
] |
In this paper, we propose Patience-based Early Exit, a straightforward yet effective inference method that can be used as a plug-and-play technique to simultaneously improve the efficiency and robustness of a pretrained language model (PLM). To achieve this, our approach couples an internal-classifier with each layer of a PLM and dynamically stops inference when the intermediate predictions of the internal classifiers remain unchanged for a pre-defined number of steps. Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers. Meanwhile, experimental results with an ALBERT model show that our method can improve the accuracy and robustness of the model by preventing it from overthinking and exploiting multiple classifiers for prediction, yielding a better accuracy-speed trade-off compared to existing early exit methods.
|
1512.07250
|
Daniele Rotolo
|
Alexander M. Petersen, Daniele Rotolo, and Loet Leydesdorff
|
A Triple Helix Model of Medical Innovation: Supply, Demand, and
Technological Capabilities in terms of Medical Subject Headings
|
Accepted for publication in Research Policy (in press)
|
Research Policy 45(3), 666-681 (2016)
|
10.1016/j.respol.2015.12.004
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a model of innovation that enables us to trace the interplay among
three key dimensions of the innovation process: (i) demand of and (ii) supply
for innovation, and (iii) technological capabilities available to generate
innovation in the forms of products, processes, and services. Building on
triple helix research, we use entropy statistics to elaborate an indicator of
mutual information among these dimensions that can provide indication of
reduction of uncertainty. To do so, we focus on the medical context, where
uncertainty poses significant challenges to the governance of innovation. We
use the Medical Subject Headings (MeSH) of MEDLINE/PubMed to identify
publications classified within the categories "Diseases" (C), "Drugs and
Chemicals" (D), "Analytic, Diagnostic, and Therapeutic Techniques and
Equipment" (E) and use these as knowledge representations of demand, supply,
and technological capabilities, respectively. Three case-studies of medical
research areas are used as representative 'entry perspectives' of the medical
innovation process. These are: (i) human papilloma virus, (ii) RNA
interference, and (iii) magnetic resonance imaging. We find statistically
significant periods of synergy among demand, supply, and technological
capabilities (C-D-E) that point to three-dimensional interactions as a
fundamental perspective for the understanding and governance of the uncertainty
associated with medical innovation. Among the pairwise configurations in these
contexts, the demand-technological capabilities (C-E) provided the strongest
link, followed by the supply-demand (D-C) and the supply-technological
capabilities (D-E) channels.
|
[
{
"created": "Tue, 22 Dec 2015 20:58:25 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jan 2016 13:14:39 GMT",
"version": "v2"
}
] |
2019-12-17
|
[
[
"Petersen",
"Alexander M.",
""
],
[
"Rotolo",
"Daniele",
""
],
[
"Leydesdorff",
"Loet",
""
]
] |
We develop a model of innovation that enables us to trace the interplay among three key dimensions of the innovation process: (i) demand of and (ii) supply for innovation, and (iii) technological capabilities available to generate innovation in the forms of products, processes, and services. Building on triple helix research, we use entropy statistics to elaborate an indicator of mutual information among these dimensions that can provide indication of reduction of uncertainty. To do so, we focus on the medical context, where uncertainty poses significant challenges to the governance of innovation. We use the Medical Subject Headings (MeSH) of MEDLINE/PubMed to identify publications classified within the categories "Diseases" (C), "Drugs and Chemicals" (D), "Analytic, Diagnostic, and Therapeutic Techniques and Equipment" (E) and use these as knowledge representations of demand, supply, and technological capabilities, respectively. Three case-studies of medical research areas are used as representative 'entry perspectives' of the medical innovation process. These are: (i) human papilloma virus, (ii) RNA interference, and (iii) magnetic resonance imaging. We find statistically significant periods of synergy among demand, supply, and technological capabilities (C-D-E) that point to three-dimensional interactions as a fundamental perspective for the understanding and governance of the uncertainty associated with medical innovation. Among the pairwise configurations in these contexts, the demand-technological capabilities (C-E) provided the strongest link, followed by the supply-demand (D-C) and the supply-technological capabilities (D-E) channels.
|
2312.11538
|
Purvi Goel
|
Purvi Goel, Kuan-Chieh Wang, C. Karen Liu, Kayvon Fatahalian
|
Iterative Motion Editing with Natural Language
| null | null |
10.1145/3641519.3657447
| null |
cs.GR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-motion diffusion models can generate realistic animations from text
prompts, but do not support fine-grained motion editing controls. In this
paper, we present a method for using natural language to iteratively specify
local edits to existing character animations, a task that is common in most
computer animation workflows. Our key idea is to represent a space of motion
edits using a set of kinematic motion editing operators (MEOs) whose effects on
the source motion is well-aligned with user expectations. We provide an
algorithm that leverages pre-existing language models to translate textual
descriptions of motion edits into source code for programs that define and
execute sequences of MEOs on a source animation. We execute MEOs by first
translating them into keyframe constraints, and then use diffusion-based motion
models to generate output motions that respect these constraints. Through a
user study and quantitative evaluation, we demonstrate that our system can
perform motion edits that respect the animator's editing intent, remain
faithful to the original animation (it edits the original animation, but does
not dramatically change it), and yield realistic character animation results.
|
[
{
"created": "Fri, 15 Dec 2023 22:38:24 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 14:42:35 GMT",
"version": "v2"
}
] |
2024-06-04
|
[
[
"Goel",
"Purvi",
""
],
[
"Wang",
"Kuan-Chieh",
""
],
[
"Liu",
"C. Karen",
""
],
[
"Fatahalian",
"Kayvon",
""
]
] |
Text-to-motion diffusion models can generate realistic animations from text prompts, but do not support fine-grained motion editing controls. In this paper, we present a method for using natural language to iteratively specify local edits to existing character animations, a task that is common in most computer animation workflows. Our key idea is to represent a space of motion edits using a set of kinematic motion editing operators (MEOs) whose effects on the source motion is well-aligned with user expectations. We provide an algorithm that leverages pre-existing language models to translate textual descriptions of motion edits into source code for programs that define and execute sequences of MEOs on a source animation. We execute MEOs by first translating them into keyframe constraints, and then use diffusion-based motion models to generate output motions that respect these constraints. Through a user study and quantitative evaluation, we demonstrate that our system can perform motion edits that respect the animator's editing intent, remain faithful to the original animation (it edits the original animation, but does not dramatically change it), and yield realistic character animation results.
|
1902.00771
|
Adi Botea
|
Adi Botea, Christian Muise, Shubham Agarwal, Oznur Alkan, Ondrej
Bajgar, Elizabeth Daly, Akihiro Kishimoto, Luis Lastras, Radu Marinescu,
Josef Ondrej, Pablo Pedemonte, Miroslav Vodolan
|
Generating Dialogue Agents via Automated Planning
|
Accepted at the AAAI-2019 DEEP-DIAL workshop
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialogue systems have many applications such as customer support or question
answering. Typically they have been limited to shallow single turn
interactions. However more advanced applications such as career coaching or
planning a trip require a much more complex multi-turn dialogue. Current
limitations of conversational systems have made it difficult to support
applications that require personalization, customization and context dependent
interactions. We tackle this challenging problem by using domain-independent AI
planning to automatically create dialogue plans, customized to guide a dialogue
towards achieving a given goal. The input includes a library of atomic dialogue
actions, an initial state of the dialogue, and a goal. Dialogue plans are
plugged into a dialogue system capable to orchestrate their execution. Use
cases demonstrate the viability of the approach. Our work on dialogue planning
has been integrated into a product, and it is in the process of being deployed
into another.
|
[
{
"created": "Sat, 2 Feb 2019 19:23:30 GMT",
"version": "v1"
}
] |
2019-02-05
|
[
[
"Botea",
"Adi",
""
],
[
"Muise",
"Christian",
""
],
[
"Agarwal",
"Shubham",
""
],
[
"Alkan",
"Oznur",
""
],
[
"Bajgar",
"Ondrej",
""
],
[
"Daly",
"Elizabeth",
""
],
[
"Kishimoto",
"Akihiro",
""
],
[
"Lastras",
"Luis",
""
],
[
"Marinescu",
"Radu",
""
],
[
"Ondrej",
"Josef",
""
],
[
"Pedemonte",
"Pablo",
""
],
[
"Vodolan",
"Miroslav",
""
]
] |
Dialogue systems have many applications such as customer support or question answering. Typically they have been limited to shallow single turn interactions. However more advanced applications such as career coaching or planning a trip require a much more complex multi-turn dialogue. Current limitations of conversational systems have made it difficult to support applications that require personalization, customization and context dependent interactions. We tackle this challenging problem by using domain-independent AI planning to automatically create dialogue plans, customized to guide a dialogue towards achieving a given goal. The input includes a library of atomic dialogue actions, an initial state of the dialogue, and a goal. Dialogue plans are plugged into a dialogue system capable to orchestrate their execution. Use cases demonstrate the viability of the approach. Our work on dialogue planning has been integrated into a product, and it is in the process of being deployed into another.
|
2407.10473
|
N. Ege Sara\c{c}
|
Thomas A. Henzinger, Nicolas Mazzocchi, N. Ege Sara\c{c}
|
Strategic Dominance: A New Preorder for Nondeterministic Processes
|
To appear in CONCUR 2024
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We study the following refinement relation between nondeterministic
state-transition models: model B strategically dominates model A iff every
deterministic refinement of A is language contained in some deterministic
refinement of B. While language containment is trace inclusion, and the (fair)
simulation preorder coincides with tree inclusion, strategic dominance falls
strictly between the two and can be characterized as "strategy inclusion"
between A and B: every strategy that resolves the nondeterminism of A is
dominated by a strategy that resolves the nondeterminism of B. Strategic
dominance can be checked in 2-ExpTime by a decidable first-order Presburger
logic with quantification over words and strategies, called resolver logic. We
give several other applications of resolver logic, including checking the
co-safety, co-liveness, and history-determinism of boolean and quantitative
automata, and checking the inclusion between hyperproperties that are specified
by nondeterministic boolean and quantitative automata.
|
[
{
"created": "Mon, 15 Jul 2024 07:00:58 GMT",
"version": "v1"
}
] |
2024-07-16
|
[
[
"Henzinger",
"Thomas A.",
""
],
[
"Mazzocchi",
"Nicolas",
""
],
[
"Saraç",
"N. Ege",
""
]
] |
We study the following refinement relation between nondeterministic state-transition models: model B strategically dominates model A iff every deterministic refinement of A is language contained in some deterministic refinement of B. While language containment is trace inclusion, and the (fair) simulation preorder coincides with tree inclusion, strategic dominance falls strictly between the two and can be characterized as "strategy inclusion" between A and B: every strategy that resolves the nondeterminism of A is dominated by a strategy that resolves the nondeterminism of B. Strategic dominance can be checked in 2-ExpTime by a decidable first-order Presburger logic with quantification over words and strategies, called resolver logic. We give several other applications of resolver logic, including checking the co-safety, co-liveness, and history-determinism of boolean and quantitative automata, and checking the inclusion between hyperproperties that are specified by nondeterministic boolean and quantitative automata.
|
2106.12893
|
Thomas Viehmann
|
Thomas Viehmann
|
Partial Wasserstein and Maximum Mean Discrepancy distances for bridging
the gap between outlier detection and drift detection
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rise of machine learning and deep learning based applications in
practice, monitoring, i.e. verifying that these operate within specification,
has become an important practical problem. An important aspect of this
monitoring is to check whether the inputs (or intermediates) have strayed from
the distribution they were validated for, which can void the performance
assurances obtained during testing.
There are two common approaches for this. The, perhaps, more classical one is
outlier detection or novelty detection, where, for a single input we ask
whether it is an outlier, i.e. exceedingly unlikely to have originated from a
reference distribution. The second, perhaps more recent approach, is to
consider a larger number of inputs and compare its distribution to a reference
distribution (e.g. sampled during testing). This is done under the label drift
detection.
In this work, we bridge the gap between outlier detection and drift detection
through comparing a given number of inputs to an automatically chosen part of
the reference distribution.
|
[
{
"created": "Wed, 9 Jun 2021 18:49:55 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 09:17:27 GMT",
"version": "v2"
}
] |
2021-06-29
|
[
[
"Viehmann",
"Thomas",
""
]
] |
With the rise of machine learning and deep learning based applications in practice, monitoring, i.e. verifying that these operate within specification, has become an important practical problem. An important aspect of this monitoring is to check whether the inputs (or intermediates) have strayed from the distribution they were validated for, which can void the performance assurances obtained during testing. There are two common approaches for this. The, perhaps, more classical one is outlier detection or novelty detection, where, for a single input we ask whether it is an outlier, i.e. exceedingly unlikely to have originated from a reference distribution. The second, perhaps more recent approach, is to consider a larger number of inputs and compare its distribution to a reference distribution (e.g. sampled during testing). This is done under the label drift detection. In this work, we bridge the gap between outlier detection and drift detection through comparing a given number of inputs to an automatically chosen part of the reference distribution.
|
2310.09632
|
Juan Yepes
|
Juan D. Yepes, Daniel Raviv
|
Time-based Mapping of Space Using Visual Motion Invariants
|
3 pages
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper focuses on visual motion-based invariants that result in a
representation of 3D points in which the stationary environment remains
invariant, ensuring shape constancy. This is achieved even as the images
undergo constant change due to camera motion. Nonlinear functions of measurable
optical flow, which are related to geometric 3D invariants, are utilized to
create a novel representation. We refer to the resulting optical flow-based
invariants as 'Time-Clearance' and the well-known 'Time-to-Contact' (TTC).
Since these invariants remain constant over time, it becomes straightforward to
detect moving points that do not adhere to the expected constancy. We present
simulations of a camera moving relative to a 3D object, snapshots of its
projected images captured by a rectilinearly moving camera, and the object as
it appears unchanged in the new domain over time. In addition, Unity-based
simulations demonstrate color-coded transformations of a projected 3D scene,
illustrating how moving objects can be readily identified. This representation
is straightforward, relying on simple optical flow functions. It requires only
one camera, and there is no need to determine the magnitude of the camera's
velocity vector. Furthermore, the representation is pixel-based, making it
suitable for parallel processing.
|
[
{
"created": "Sat, 14 Oct 2023 17:55:49 GMT",
"version": "v1"
}
] |
2023-10-17
|
[
[
"Yepes",
"Juan D.",
""
],
[
"Raviv",
"Daniel",
""
]
] |
This paper focuses on visual motion-based invariants that result in a representation of 3D points in which the stationary environment remains invariant, ensuring shape constancy. This is achieved even as the images undergo constant change due to camera motion. Nonlinear functions of measurable optical flow, which are related to geometric 3D invariants, are utilized to create a novel representation. We refer to the resulting optical flow-based invariants as 'Time-Clearance' and the well-known 'Time-to-Contact' (TTC). Since these invariants remain constant over time, it becomes straightforward to detect moving points that do not adhere to the expected constancy. We present simulations of a camera moving relative to a 3D object, snapshots of its projected images captured by a rectilinearly moving camera, and the object as it appears unchanged in the new domain over time. In addition, Unity-based simulations demonstrate color-coded transformations of a projected 3D scene, illustrating how moving objects can be readily identified. This representation is straightforward, relying on simple optical flow functions. It requires only one camera, and there is no need to determine the magnitude of the camera's velocity vector. Furthermore, the representation is pixel-based, making it suitable for parallel processing.
|
1006.2691
|
Eswar Karthikeyan
|
S. Ganesh, R. Amutha
|
Real Time and Energy Efficient Transport Protocol for Wireless Sensor
Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reliable transport protocols such as TCP are tuned to perform well in
traditional networks where packet losses occur mostly because of congestion.
Many applications of wireless sensor networks are useful only when connected to
an external network. Previous research on transport layer protocols for sensor
networks has focused on designing protocols specifically targeted for sensor
networks. The deployment of TCP/IP in sensor networks would, however, enable
direct connection between the sensor network and external TCP/IP networks. In
this paper we focus on the performance of TCP in the context of wireless sensor
networks. TCP is known to exhibit poor performance in wireless environments,
both in terms of throughput and energy efficiency. To overcome these problems
we introduce a mechanism called TCP Segment Caching. We show by simulation that
TCP Segment Caching significantly improves TCP performance so that TCP can be
useful even in wireless sensor networks.
|
[
{
"created": "Mon, 14 Jun 2010 12:26:02 GMT",
"version": "v1"
}
] |
2010-06-15
|
[
[
"Ganesh",
"S.",
""
],
[
"Amutha",
"R.",
""
]
] |
Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. Many applications of wireless sensor networks are useful only when connected to an external network. Previous research on transport layer protocols for sensor networks has focused on designing protocols specifically targeted for sensor networks. The deployment of TCP/IP in sensor networks would, however, enable direct connection between the sensor network and external TCP/IP networks. In this paper we focus on the performance of TCP in the context of wireless sensor networks. TCP is known to exhibit poor performance in wireless environments, both in terms of throughput and energy efficiency. To overcome these problems we introduce a mechanism called TCP Segment Caching. We show by simulation that TCP Segment Caching significantly improves TCP performance, so that TCP can be useful even in wireless sensor networks.
|
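The abstract does not spell out how TCP Segment Caching works; the toy sketch below only illustrates the general idea of caching forwarded segments at an intermediate sensor node so a downstream loss can be repaired locally instead of triggering an energy-expensive end-to-end retransmission. The class name, capacity, and eviction policy are all assumptions made for illustration.

```python
class SegmentCache:
    """Toy sketch of segment caching at an intermediate sensor node:
    forwarded TCP segments are kept in a small buffer so that a loss
    reported from further along the path can be answered locally."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.cache = {}  # seq_no -> payload, in insertion order

    def forward(self, seq_no, payload):
        """Cache a segment while passing it downstream."""
        if len(self.cache) >= self.capacity:
            # Evict the oldest cached segment (dicts keep insertion order).
            self.cache.pop(next(iter(self.cache)))
        self.cache[seq_no] = payload
        return payload

    def local_retransmit(self, seq_no):
        """Repair a downstream loss from the cache; None means the
        retransmission must go end-to-end from the original source."""
        return self.cache.get(seq_no)

node = SegmentCache()
node.forward(1, b"hello")
node.forward(2, b"world")
print(node.local_retransmit(2))  # repaired locally: b'world'
print(node.local_retransmit(9))  # not cached: None, end-to-end resend
```

The energy argument is that a local retransmission crosses one hop, while an end-to-end one crosses every hop between source and sink.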
2407.07713
|
Ali Shibli
|
Ali Shibli, Tahar Zanouda
|
Data-Driven Radio Environment Map Estimation Using Graph Neural Networks
|
Accepted at the 17th International Workshop on Data Driven
Intelligence for Networks and Systems (DDINS) - IEEE International Conference
on Communications (ICC) 2024
| null | null | null |
cs.NI cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Radio Environment Maps (REMs) are crucial for numerous applications in
Telecom. The construction of accurate REMs has become
an important and challenging topic in recent decades. In this paper, we present
a method to estimate REMs using Graph Neural Networks. This approach utilizes
both physical cell information and sparse geo-located signal strength
measurements to estimate REMs. The method first divides and encodes mobile
network coverage areas into a graph. Then, it inputs sparse geo-located signal
strength measurements, characterized by Reference Signal Received Power (RSRP)
and Reference Signal Received Quality (RSRQ) metrics, into a Graph Neural
Network Model to estimate REMs. The proposed architecture inherits the
advantages of a Graph Neural Network in capturing the spatial dependencies of
network-wide coverage with respect to Radio Access Network node locations and
the spatial proximity of known measurements.
|
[
{
"created": "Sun, 9 Jun 2024 00:17:33 GMT",
"version": "v1"
}
] |
2024-07-11
|
[
[
"Shibli",
"Ali",
""
],
[
"Zanouda",
"Tahar",
""
]
] |
Radio Environment Maps (REMs) are crucial for numerous applications in Telecom. The construction of accurate REMs has become an important and challenging topic in recent decades. In this paper, we present a method to estimate REMs using Graph Neural Networks. This approach utilizes both physical cell information and sparse geo-located signal strength measurements to estimate REMs. The method first divides and encodes mobile network coverage areas into a graph. Then, it inputs sparse geo-located signal strength measurements, characterized by Reference Signal Received Power (RSRP) and Reference Signal Received Quality (RSRQ) metrics, into a Graph Neural Network Model to estimate REMs. The proposed architecture inherits the advantages of a Graph Neural Network in capturing the spatial dependencies of network-wide coverage with respect to Radio Access Network node locations and the spatial proximity of known measurements.
|
1802.03796
|
Daphna Weinshall
|
Daphna Weinshall, Gad Cohen and Dan Amir
|
Curriculum Learning by Transfer Learning: Theory and Experiments with
Deep Networks
|
ICML 2018
|
Proceedings: 35th International Conference on Machine Learning
(ICML), oral, Stockholm Sweden, July 2018
| null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide a theoretical investigation of curriculum learning in the context of
stochastic gradient descent when optimizing the convex linear regression loss.
We prove that the rate of convergence of an ideal curriculum learning method is
monotonically increasing with the difficulty of the examples. Moreover, among
all equally difficult points, convergence is faster when using points which
incur higher loss with respect to the current hypothesis. We then analyze
curriculum learning in the context of training a CNN. We describe a method
which infers the curriculum by way of transfer learning from another network,
pre-trained on a different task. While this approach can only approximate the
ideal curriculum, we observe empirically similar behavior to the one predicted
by the theory, namely, a significant boost in convergence speed at the
beginning of training. When the task is made more difficult, improvement in
generalization performance is also observed. Finally, curriculum learning
exhibits robustness against unfavorable conditions such as excessive
regularization.
|
[
{
"created": "Sun, 11 Feb 2018 19:24:47 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Apr 2018 13:53:21 GMT",
"version": "v2"
},
{
"created": "Tue, 22 May 2018 15:20:06 GMT",
"version": "v3"
},
{
"created": "Fri, 8 Jun 2018 18:04:50 GMT",
"version": "v4"
}
] |
2023-12-29
|
[
[
"Weinshall",
"Daphna",
""
],
[
"Cohen",
"Gad",
""
],
[
"Amir",
"Dan",
""
]
] |
We provide a theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the convex linear regression loss. We prove that the rate of convergence of an ideal curriculum learning method is monotonically increasing with the difficulty of the examples. Moreover, among all equally difficult points, convergence is faster when using points which incur higher loss with respect to the current hypothesis. We then analyze curriculum learning in the context of training a CNN. We describe a method which infers the curriculum by way of transfer learning from another network, pre-trained on a different task. While this approach can only approximate the ideal curriculum, we observe empirically similar behavior to the one predicted by the theory, namely, a significant boost in convergence speed at the beginning of training. When the task is made more difficult, improvement in generalization performance is also observed. Finally, curriculum learning exhibits robustness against unfavorable conditions such as excessive regularization.
|
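The setting analyzed in the record above (curriculum learning for SGD on linear regression) can be sketched in a few lines. This is an illustrative toy, not the paper's experiment: "difficulty" is modeled here as per-example noise level, and the curriculum simply presents low-noise examples first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression y = <w*, x> + noise, where the per-example
# noise scale plays the role of example "difficulty" (an assumption made
# for illustration).
n, d = 200, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
difficulty = rng.uniform(0.0, 3.0, size=n)   # easy (0) .. hard (3)
y = X @ w_star + difficulty * rng.normal(size=n)

def sgd_error(order, lr=0.02, epochs=5):
    """Run plain SGD on the squared loss, visiting examples in the
    given order; return the final distance to the true weights."""
    w = np.zeros(d)
    for _ in range(epochs):
        for i in order:
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5*(x.w - y)^2
            w -= lr * grad
    return float(np.linalg.norm(w - w_star))

curriculum = np.argsort(difficulty)           # easy examples first
random_order = rng.permutation(n)
print(sgd_error(curriculum), sgd_error(random_order))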
2110.06006
|
Mahdi Abolfazli Esfahani
|
Mahdi Abolfazli Esfahani, Han Wang
|
Robust Glare Detection: Review, Analysis, and Dataset Release
| null | null | null | null |
cs.RO cs.AI eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sun Glare widely exists in the images captured by unmanned ground and aerial
vehicles performing in outdoor environments. The existence of such artifacts in
images will result in wrong feature extraction and failure of autonomous
systems. Humans will try to adapt their view once they observe a glare
(especially when driving), and this behavior is an essential requirement for
the next generation of autonomous vehicles. The source of glare is not limited
to the sun, and glare can be seen in the images captured during the nighttime
and in indoor environments, which is due to the presence of different light
sources; reflective surfaces also influence the generation of such artifacts.
The glare's visual characteristics are different on images captured by various
cameras and depend on several factors such as the camera's shutter speed and
exposure level. Hence, it is challenging to introduce a general, robust, and
accurate algorithm for glare detection that can perform well in various
captured images. This research aims to introduce the first dataset for glare
detection, which includes images captured by different cameras. Besides, the
effect of multiple image representations and their combination in glare
detection is examined using the proposed deep network architecture. The
released dataset is available at https://github.com/maesfahani/glaredetection
|
[
{
"created": "Tue, 12 Oct 2021 13:46:33 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Oct 2021 12:47:50 GMT",
"version": "v2"
}
] |
2021-10-14
|
[
[
"Esfahani",
"Mahdi Abolfazli",
""
],
[
"Wang",
"Han",
""
]
] |
Sun Glare widely exists in the images captured by unmanned ground and aerial vehicles performing in outdoor environments. The existence of such artifacts in images will result in wrong feature extraction and failure of autonomous systems. Humans will try to adapt their view once they observe a glare (especially when driving), and this behavior is an essential requirement for the next generation of autonomous vehicles. The source of glare is not limited to the sun, and glare can be seen in the images captured during the nighttime and in indoor environments, which is due to the presence of different light sources; reflective surfaces also influence the generation of such artifacts. The glare's visual characteristics are different on images captured by various cameras and depend on several factors such as the camera's shutter speed and exposure level. Hence, it is challenging to introduce a general, robust, and accurate algorithm for glare detection that can perform well in various captured images. This research aims to introduce the first dataset for glare detection, which includes images captured by different cameras. Besides, the effect of multiple image representations and their combination in glare detection is examined using the proposed deep network architecture. The released dataset is available at https://github.com/maesfahani/glaredetection
|
2405.09061
|
Tsuyoshi Id\'e
|
Tsuyoshi Id\'e, Jokin Labaien, and Pin-Yu Chen
|
Improving Transformers using Faithful Positional Encoding
|
arXiv admin note: text overlap with arXiv:2305.17149
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new positional encoding method for a neural network architecture
called the Transformer. Unlike the standard sinusoidal positional encoding, our
approach is based on solid mathematical grounds and has a guarantee of not
losing information about the positional order of the input sequence. We show
that the new encoding approach systematically improves the prediction
performance in the time-series classification task.
|
[
{
"created": "Wed, 15 May 2024 03:17:30 GMT",
"version": "v1"
},
{
"created": "Thu, 16 May 2024 06:26:43 GMT",
"version": "v2"
}
] |
2024-05-17
|
[
[
"Idé",
"Tsuyoshi",
""
],
[
"Labaien",
"Jokin",
""
],
[
"Chen",
"Pin-Yu",
""
]
] |
We propose a new positional encoding method for a neural network architecture called the Transformer. Unlike the standard sinusoidal positional encoding, our approach is based on solid mathematical grounds and has a guarantee of not losing information about the positional order of the input sequence. We show that the new encoding approach systematically improves the prediction performance in the time-series classification task.
|
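The record above contrasts a new encoding with the standard sinusoidal one. The paper's own encoding is not specified in the abstract; as context, here is the standard sinusoidal positional encoding it argues against, which it claims can lose information about positional order.

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model):
    """Standard Transformer sinusoidal positional encoding (the baseline
    the paper critiques): PE[p, 2i] = sin(p / 10000**(2i/d)) and
    PE[p, 2i+1] = cos(p / 10000**(2i/d)). d_model must be even here."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]      # (1, d_model/2)
    angle = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)                # even dimensions
    pe[:, 1::2] = np.cos(angle)                # odd dimensions
    return pe

pe = sinusoidal_pe(50, 16)
print(pe.shape)  # (50, 16); row p is the encoding added to token p
```

Each position maps to a fixed vector of sines and cosines at geometrically spaced frequencies, which is added to (rather than concatenated with) the token embedding in the original Transformer.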
2312.10674
|
Fangjun Liu
|
Ran Chen, Xingjian Yi, Jing Zhao, Yueheng He, Bainian Chen, Xueqi Yao,
Fangjun Liu, Haoran Li, Zeke Lian
|
A Framework of Full-Process Generation Design for Park Green Spaces
Based on Remote Sensing Segmentation-GAN-Diffusion
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The development of generative design driven by artificial intelligence
algorithms is advancing rapidly. There are two gaps in the current research: 1)
Most studies only focus on the relationship between design elements and pay
little attention to the external information of the site; 2) GAN and other
traditional generative algorithms generate results with low resolution and
insufficient details. To address these two problems, we integrate GAN with
Stable Diffusion, a multimodal large-scale image pre-training model, to
construct a full-process park generative design method: 1) First, construct a
high-precision remote sensing object extraction system for automated extraction
of urban environmental information; 2) Secondly, use GAN to construct a park
design generation system based on the external environment, which can quickly
infer and generate design schemes from urban environmental information; 3)
Finally, introduce Stable Diffusion to optimize the design plan, fill in
details, and expand the resolution of the plan by 64 times. This method can
achieve a fully unmanned design automation workflow. The research results show
that: 1) The relationship between the inside and outside of the site will
affect the algorithm generation results. 2) Compared with traditional GAN
algorithms, Stable Diffusion significantly improves the information richness of
the generated results.
|
[
{
"created": "Sun, 17 Dec 2023 10:16:47 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Chen",
"Ran",
""
],
[
"Yi",
"Xingjian",
""
],
[
"Zhao",
"Jing",
""
],
[
"He",
"Yueheng",
""
],
[
"Chen",
"Bainian",
""
],
[
"Yao",
"Xueqi",
""
],
[
"Liu",
"Fangjun",
""
],
[
"Li",
"Haoran",
""
],
[
"Lian",
"Zeke",
""
]
] |
The development of generative design driven by artificial intelligence algorithms is advancing rapidly. There are two gaps in the current research: 1) Most studies only focus on the relationship between design elements and pay little attention to the external information of the site; 2) GAN and other traditional generative algorithms generate results with low resolution and insufficient details. To address these two problems, we integrate GAN with Stable Diffusion, a multimodal large-scale image pre-training model, to construct a full-process park generative design method: 1) First, construct a high-precision remote sensing object extraction system for automated extraction of urban environmental information; 2) Secondly, use GAN to construct a park design generation system based on the external environment, which can quickly infer and generate design schemes from urban environmental information; 3) Finally, introduce Stable Diffusion to optimize the design plan, fill in details, and expand the resolution of the plan by 64 times. This method can achieve a fully unmanned design automation workflow. The research results show that: 1) The relationship between the inside and outside of the site will affect the algorithm generation results. 2) Compared with traditional GAN algorithms, Stable Diffusion significantly improves the information richness of the generated results.
|
2108.08790
|
Vignesh Nanda Kumar
|
Vignesh Nanda Kumar and Narayanan U Edakunni
|
Simple is better: Making Decision Trees faster using random sampling
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, gradient boosted decision trees have become popular in
building robust machine learning models on big data. The primary technique that
has enabled these algorithms' success has been distributing the computation
while building the decision trees. Distributed decision tree building, in
turn, has been enabled by computing quantiles of the big datasets and choosing
the candidate split points from these quantile sets. In XGBoost, for instance,
a sophisticated quantile building algorithm is employed to identify the
candidate split points for the decision trees. This method is often projected
to yield better results when the computation is distributed. In this paper, we
dispel the notion that these methods provide more accurate and scalable methods
for building decision trees in a distributed manner. In a significant
contribution, we show theoretically and empirically that choosing the split
points uniformly at random provides the same or even better performance in
terms of accuracy and computational efficiency. Hence, a simple random
selection of points suffices for decision tree building compared to more
sophisticated methods.
|
[
{
"created": "Thu, 19 Aug 2021 17:00:21 GMT",
"version": "v1"
}
] |
2021-08-20
|
[
[
"Kumar",
"Vignesh Nanda",
""
],
[
"Edakunni",
"Narayanan U",
""
]
] |
In recent years, gradient boosted decision trees have become popular in building robust machine learning models on big data. The primary technique that has enabled these algorithms' success has been distributing the computation while building the decision trees. Distributed decision tree building, in turn, has been enabled by computing quantiles of the big datasets and choosing the candidate split points from these quantile sets. In XGBoost, for instance, a sophisticated quantile building algorithm is employed to identify the candidate split points for the decision trees. This method is often projected to yield better results when the computation is distributed. In this paper, we dispel the notion that these methods provide more accurate and scalable methods for building decision trees in a distributed manner. In a significant contribution, we show theoretically and empirically that choosing the split points uniformly at random provides the same or even better performance in terms of accuracy and computational efficiency. Hence, a simple random selection of points suffices for decision tree building compared to more sophisticated methods.
|
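The comparison the record above makes (quantile-based vs. random candidate split points) can be sketched on a toy regression split. This is an illustrative simplification, not the paper's experiment: a single feature, variance impurity, and 16 candidates per strategy.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_split(x, y, candidates):
    """Pick the candidate threshold minimizing the weighted variance of
    the two children (standard regression-tree impurity)."""
    best_t, best_score = None, np.inf
    for t in candidates:
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        score = len(left) * left.var() + len(right) * right.var()
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Data with a true breakpoint at x = 0.5.
x = rng.uniform(0.0, 1.0, 2000)
y = np.where(x <= 0.5, 0.0, 1.0) + 0.1 * rng.normal(size=2000)

quantile_cand = np.quantile(x, np.linspace(0.05, 0.95, 16))  # sketch-based
random_cand = rng.choice(x, size=16, replace=False)          # uniform random
t_q, _ = best_split(x, y, quantile_cand)
t_r, _ = best_split(x, y, random_cand)
print(t_q, t_r)  # both strategies typically land near the breakpoint
```

With enough candidates, the best uniformly random threshold tends to fall close to the quantile-based choice, which is the intuition behind the paper's claim that random selection suffices.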
1412.0223
|
Peng Cheng
|
Peng Cheng, Xiang Lian, Zhao Chen, Rui Fu, Lei Chen, Jinsong Han,
Jizhong Zhao
|
Reliable Diversity-Based Spatial Crowdsourcing by Moving Workers
|
16 pages
| null |
10.14778/2794367.2794372
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of mobile devices and crowdsourcing platforms,
spatial crowdsourcing has attracted much attention from the database
community. Specifically, spatial crowdsourcing refers to sending a
location-based request to workers according to their positions. In this paper,
we consider an important spatial crowdsourcing problem, namely reliable
diversity-based spatial crowdsourcing (RDB-SC), in which spatial tasks (such as
taking videos/photos of a landmark or firework shows, and checking whether or
not parking spaces are available) are time-constrained, and workers are moving
towards some directions. Our RDB-SC problem is to assign workers to spatial
tasks such that the completion reliability and the spatial/temporal diversities
of spatial tasks are maximized. We prove that the RDB-SC problem is NP-hard and
intractable. Thus, we propose three effective approximation approaches,
including greedy, sampling, and divide-and-conquer algorithms. In order to
improve the efficiency, we also design an effective cost-model-based index,
which can dynamically maintain moving workers and spatial tasks with low cost,
and efficiently facilitate the retrieval of RDB-SC answers. Through extensive
experiments, we demonstrate the efficiency and effectiveness of our proposed
approaches over both real and synthetic data sets.
|
[
{
"created": "Sun, 30 Nov 2014 15:06:53 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Mar 2015 08:26:38 GMT",
"version": "v2"
},
{
"created": "Sat, 9 May 2015 02:18:23 GMT",
"version": "v3"
},
{
"created": "Mon, 22 Jun 2015 01:23:23 GMT",
"version": "v4"
},
{
"created": "Tue, 10 Nov 2015 14:56:18 GMT",
"version": "v5"
}
] |
2016-10-27
|
[
[
"Cheng",
"Peng",
""
],
[
"Lian",
"Xiang",
""
],
[
"Chen",
"Zhao",
""
],
[
"Fu",
"Rui",
""
],
[
"Chen",
"Lei",
""
],
[
"Han",
"Jinsong",
""
],
[
"Zhao",
"Jizhong",
""
]
] |
With the rapid development of mobile devices and crowdsourcing platforms, spatial crowdsourcing has attracted much attention from the database community. Specifically, spatial crowdsourcing refers to sending a location-based request to workers according to their positions. In this paper, we consider an important spatial crowdsourcing problem, namely reliable diversity-based spatial crowdsourcing (RDB-SC), in which spatial tasks (such as taking videos/photos of a landmark or firework shows, and checking whether or not parking spaces are available) are time-constrained, and workers are moving towards some directions. Our RDB-SC problem is to assign workers to spatial tasks such that the completion reliability and the spatial/temporal diversities of spatial tasks are maximized. We prove that the RDB-SC problem is NP-hard and intractable. Thus, we propose three effective approximation approaches, including greedy, sampling, and divide-and-conquer algorithms. In order to improve the efficiency, we also design an effective cost-model-based index, which can dynamically maintain moving workers and spatial tasks with low cost, and efficiently facilitate the retrieval of RDB-SC answers. Through extensive experiments, we demonstrate the efficiency and effectiveness of our proposed approaches over both real and synthetic data sets.
|
1904.06683
|
Hiya Roy
|
Hiya Roy, Subhajit Chaudhury, Toshihiko Yamasaki, Danielle DeLatte,
Makiko Ohtake, Tatsuaki Hashimoto
|
Lunar surface image restoration using U-net based deep neural networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image restoration is a technique that reconstructs a feasible estimate of the
original image from the noisy observation. In this paper, we present a U-Net
based deep neural network model to restore the missing pixels on the lunar
surface image in a context-aware fashion, a task often known as the image
inpainting problem. We use the grayscale image of the lunar surface captured by
Multiband Imager (MI) onboard Kaguya satellite for our experiments and the
results show that our method can reconstruct the lunar surface image with good
visual quality and improved PSNR values.
|
[
{
"created": "Sun, 14 Apr 2019 12:10:43 GMT",
"version": "v1"
}
] |
2019-04-16
|
[
[
"Roy",
"Hiya",
""
],
[
"Chaudhury",
"Subhajit",
""
],
[
"Yamasaki",
"Toshihiko",
""
],
[
"DeLatte",
"Danielle",
""
],
[
"Ohtake",
"Makiko",
""
],
[
"Hashimoto",
"Tatsuaki",
""
]
] |
Image restoration is a technique that reconstructs a feasible estimate of the original image from the noisy observation. In this paper, we present a U-Net based deep neural network model to restore the missing pixels on the lunar surface image in a context-aware fashion, a task often known as the image inpainting problem. We use the grayscale image of the lunar surface captured by Multiband Imager (MI) onboard Kaguya satellite for our experiments and the results show that our method can reconstruct the lunar surface image with good visual quality and improved PSNR values.
|
2102.04043
|
Chao-Yu Chen
|
Cheng-Yu Pai and Chao-Yu Chen
|
Two-Dimensional Golay Complementary Array Sets from Generalized Boolean
Functions
|
Submitted to IEEE Transactions on Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The one-dimensional (1-D) Golay complementary set (GCS) has many well-known
properties and has been widely employed in engineering. The concept of 1-D GCS
can be extended to the two-dimensional (2-D) Golay complementary array set
(GCAS), where the 2-D aperiodic autocorrelations of the constituent arrays sum
to zero except at the 2-D zero shift. The 2-D GCAS includes the 2-D Golay
complementary array pair (GCAP) as a special case when the set size is 2. In
this paper, 2-D generalized Boolean functions are introduced and novel
constructions of 2-D GCAPs, 2-D GCASs, and 2-D Golay complementary array mates
based on generalized Boolean functions are proposed. Explicit expressions of
2-D Boolean functions for 2-D GCAPs and 2-D GCASs are given. Therefore, they
are all direct constructions without the aid of other existing 1-D or 2-D
sequences. Moreover, for the column sequences and row sequences of the
constructed 2-D GCAPs, their peak-to-average power ratio (PAPR) properties are
also investigated.
|
[
{
"created": "Mon, 8 Feb 2021 07:59:47 GMT",
"version": "v1"
}
] |
2021-02-09
|
[
[
"Pai",
"Cheng-Yu",
""
],
[
"Chen",
"Chao-Yu",
""
]
] |
The one-dimensional (1-D) Golay complementary set (GCS) has many well-known properties and has been widely employed in engineering. The concept of 1-D GCS can be extended to the two-dimensional (2-D) Golay complementary array set (GCAS), where the 2-D aperiodic autocorrelations of the constituent arrays sum to zero except at the 2-D zero shift. The 2-D GCAS includes the 2-D Golay complementary array pair (GCAP) as a special case when the set size is 2. In this paper, 2-D generalized Boolean functions are introduced and novel constructions of 2-D GCAPs, 2-D GCASs, and 2-D Golay complementary array mates based on generalized Boolean functions are proposed. Explicit expressions of 2-D Boolean functions for 2-D GCAPs and 2-D GCASs are given. Therefore, they are all direct constructions without the aid of other existing 1-D or 2-D sequences. Moreover, for the column sequences and row sequences of the constructed 2-D GCAPs, their peak-to-average power ratio (PAPR) properties are also investigated.
|
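The defining complementarity property in the record above is easy to verify numerically in the 1-D case that the paper generalizes. The sketch below checks a classic length-4 Golay complementary pair (a set of size 2): the aperiodic autocorrelations sum to zero at every nonzero shift.

```python
import numpy as np

def aperiodic_autocorr(seq, u):
    """Aperiodic autocorrelation of a real sequence at shift u >= 0:
    sum over i of seq[i] * seq[i + u]."""
    seq = np.asarray(seq)
    return int(np.sum(seq[: len(seq) - u] * seq[u:]))

# A classic length-4 Golay complementary pair.
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]
for u in range(1, 4):
    # Complementarity: the two autocorrelations cancel at nonzero shifts.
    assert aperiodic_autocorr(a, u) + aperiodic_autocorr(b, u) == 0
# At zero shift the sum equals 2N (here 8), which bounds the PAPR.
print(aperiodic_autocorr(a, 0) + aperiodic_autocorr(b, 0))  # 8
```

The 2-D GCAS definition replaces this 1-D shift u with a 2-D shift on arrays, with the same cancellation requirement at every nonzero shift pair.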
1005.3224
|
Amparo F\'uster-Sabater
|
Amparo F\'uster-Sabater
|
Cellular Automata in Stream Ciphers
|
26 pages, 1 figure
|
Contemporary Mathematics, Volume 477, pp. 1-20, 2009
| null | null |
cs.CR cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A wide family of nonlinear sequence generators, the so-called
clock-controlled shrinking generators, has been analyzed and identified with a
subset of linear cellular automata. The algorithm that converts the given
generator into a linear model based on automata is very simple and can be
applied in a range of practical interest. Due to the linearity of these
automata as well as the characteristics of this class of generators, a
cryptanalytic approach can be proposed. Linear cellular structures easily model
keystream generators with application in stream cipher cryptography.
|
[
{
"created": "Tue, 18 May 2010 15:11:19 GMT",
"version": "v1"
}
] |
2010-05-19
|
[
[
"Fúster-Sabater",
"Amparo",
""
]
] |
A wide family of nonlinear sequence generators, the so-called clock-controlled shrinking generators, has been analyzed and identified with a subset of linear cellular automata. The algorithm that converts the given generator into a linear model based on automata is very simple and can be applied in a range of practical interest. Due to the linearity of these automata as well as the characteristics of this class of generators, a cryptanalytic approach can be proposed. Linear cellular structures easily model keystream generators with application in stream cipher cryptography.
|
2211.16191
|
Fang Peng
|
Fang Peng, Xiaoshan Yang, Linhui Xiao, Yaowei Wang, Changsheng Xu
|
SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for
Few-shot Image Classification
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although significant progress has been made in few-shot learning, most
existing few-shot image classification methods require supervised pre-training
on a large number of samples of base classes, which limits their generalization
ability in real-world applications. Recently, large-scale Vision-Language
Pre-trained models (VLPs) have been gaining increasing attention in few-shot
learning because they can provide a new paradigm for transferable visual
representation learning with easily available text on the Web. However, the
VLPs may neglect detailed visual information that is difficult to describe by
language sentences, but important for learning an effective classifier to
distinguish different images. To address the above problem, we propose a new
framework, named Semantic-guided Visual Adapting (SgVA), which can effectively
extend vision-language pre-trained models to produce discriminative adapted
visual features by comprehensively using an implicit knowledge distillation, a
vision-specific contrastive loss, and a cross-modal contrastive loss. The
implicit knowledge distillation is designed to transfer the fine-grained
cross-modal knowledge to guide the updating of the vision adapter.
State-of-the-art results on 13 datasets demonstrate that the adapted visual
features can well complement the cross-modal features to improve few-shot image
classification.
|
[
{
"created": "Mon, 28 Nov 2022 14:58:15 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Jan 2023 13:56:39 GMT",
"version": "v2"
}
] |
2023-01-23
|
[
[
"Peng",
"Fang",
""
],
[
"Yang",
"Xiaoshan",
""
],
[
"Xiao",
"Linhui",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Xu",
"Changsheng",
""
]
] |
Although significant progress has been made in few-shot learning, most existing few-shot image classification methods require supervised pre-training on a large number of samples of base classes, which limits their generalization ability in real-world applications. Recently, large-scale Vision-Language Pre-trained models (VLPs) have been gaining increasing attention in few-shot learning because they can provide a new paradigm for transferable visual representation learning with easily available text on the Web. However, the VLPs may neglect detailed visual information that is difficult to describe by language sentences, but important for learning an effective classifier to distinguish different images. To address the above problem, we propose a new framework, named Semantic-guided Visual Adapting (SgVA), which can effectively extend vision-language pre-trained models to produce discriminative adapted visual features by comprehensively using an implicit knowledge distillation, a vision-specific contrastive loss, and a cross-modal contrastive loss. The implicit knowledge distillation is designed to transfer the fine-grained cross-modal knowledge to guide the updating of the vision adapter. State-of-the-art results on 13 datasets demonstrate that the adapted visual features can well complement the cross-modal features to improve few-shot image classification.
|
1411.2153
|
Simone Cirillo
|
Simone Cirillo, Stefan Lloyd, Peter Nordin
|
Evolving intraday foreign exchange trading strategies utilizing multiple
instruments price series
|
15 pages, 10 figures, 9 tables
| null | null | null |
cs.NE q-fin.TR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a Genetic Programming architecture for the generation of foreign
exchange trading strategies. The system's principal features are the evolution
of free-form strategies which do not rely on any prior models and the
utilization of price series from multiple instruments as input data. This
latter feature constitutes an innovation with respect to previous works
documented in the literature. In this article we utilize Open, High, Low,
Close bar data at a 5-minute frequency for the AUD.USD, EUR.USD, GBP.USD and USD.JPY
currency pairs. We will test the implementation analyzing the in-sample and
out-of-sample performance of strategies for trading the USD.JPY obtained across
multiple algorithm runs. We will also evaluate the differences between
strategies selected according to two different criteria: one relies on the
fitness obtained on the training set only, the second one makes use of an
additional validation dataset. Strategy activity and trade accuracy are
remarkably stable between in and out of sample results. From a profitability
aspect, the two criteria both result in strategies successful on out-of-sample
data but exhibiting different characteristics. The overall best performing
out-of-sample strategy achieves a yearly return of 19%.
|
[
{
"created": "Sat, 8 Nov 2014 19:22:55 GMT",
"version": "v1"
}
] |
2014-11-11
|
[
[
"Cirillo",
"Simone",
""
],
[
"Lloyd",
"Stefan",
""
],
[
"Nordin",
"Peter",
""
]
] |
We propose a Genetic Programming architecture for the generation of foreign exchange trading strategies. The system's principal features are the evolution of free-form strategies which do not rely on any prior models and the utilization of price series from multiple instruments as input data. This latter feature constitutes an innovation with respect to previous works documented in the literature. In this article we utilize Open, High, Low, Close bar data at a 5-minute frequency for the AUD.USD, EUR.USD, GBP.USD and USD.JPY currency pairs. We will test the implementation analyzing the in-sample and out-of-sample performance of strategies for trading the USD.JPY obtained across multiple algorithm runs. We will also evaluate the differences between strategies selected according to two different criteria: one relies on the fitness obtained on the training set only, the second one makes use of an additional validation dataset. Strategy activity and trade accuracy are remarkably stable between in and out of sample results. From a profitability aspect, the two criteria both result in strategies successful on out-of-sample data but exhibiting different characteristics. The overall best performing out-of-sample strategy achieves a yearly return of 19%.
|
2206.08921
|
Lawrence Yunliang Chen
|
Lawrence Yunliang Chen, Huang Huang, Ellen Novoseller, Daniel Seita,
Jeffrey Ichnowski, Michael Laskey, Richard Cheng, Thomas Kollar, Ken Goldberg
|
Efficiently Learning Single-Arm Fling Motions to Smooth Garments
|
Accepted to 2022 International Symposium on Robotics Research (ISRR)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work has shown that 2-arm "fling" motions can be effective for garment
smoothing. We consider single-arm fling motions. Unlike 2-arm fling motions,
which require little robot trajectory parameter tuning, single-arm fling
motions are very sensitive to trajectory parameters. We consider a single 6-DOF
robot arm that learns fling trajectories to achieve high garment coverage.
Given a garment grasp point, the robot explores different parameterized fling
trajectories in physical experiments. To improve learning efficiency, we
propose a coarse-to-fine learning method that first uses a multi-armed bandit
(MAB) framework to efficiently find a candidate fling action, which it then
refines via a continuous optimization method. Further, we propose novel
training and execution-time stopping criteria based on fling outcome
uncertainty; the training-time stopping criterion increases data efficiency
while the execution-time stopping criteria leverage repeated fling actions to
increase performance. Compared to baselines, the proposed method significantly
accelerates learning. Moreover, with prior experience on similar garments
collected through self-supervision, the MAB learning time for a new garment is
reduced by up to 87%. We evaluate on 36 real garments: towels, T-shirts,
long-sleeve shirts, dresses, sweat pants, and jeans. Results suggest that using
prior experience, a robot requires under 30 minutes to learn a fling action for
a novel garment that achieves 60-94% coverage.
|
[
{
"created": "Fri, 17 Jun 2022 17:57:32 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Sep 2022 09:13:41 GMT",
"version": "v2"
}
] |
2022-09-27
|
[
[
"Chen",
"Lawrence Yunliang",
""
],
[
"Huang",
"Huang",
""
],
[
"Novoseller",
"Ellen",
""
],
[
"Seita",
"Daniel",
""
],
[
"Ichnowski",
"Jeffrey",
""
],
[
"Laskey",
"Michael",
""
],
[
"Cheng",
"Richard",
""
],
[
"Kollar",
"Thomas",
""
],
[
"Goldberg",
"Ken",
""
]
] |
Recent work has shown that 2-arm "fling" motions can be effective for garment smoothing. We consider single-arm fling motions. Unlike 2-arm fling motions, which require little robot trajectory parameter tuning, single-arm fling motions are very sensitive to trajectory parameters. We consider a single 6-DOF robot arm that learns fling trajectories to achieve high garment coverage. Given a garment grasp point, the robot explores different parameterized fling trajectories in physical experiments. To improve learning efficiency, we propose a coarse-to-fine learning method that first uses a multi-armed bandit (MAB) framework to efficiently find a candidate fling action, which it then refines via a continuous optimization method. Further, we propose novel training and execution-time stopping criteria based on fling outcome uncertainty; the training-time stopping criterion increases data efficiency while the execution-time stopping criteria leverage repeated fling actions to increase performance. Compared to baselines, the proposed method significantly accelerates learning. Moreover, with prior experience on similar garments collected through self-supervision, the MAB learning time for a new garment is reduced by up to 87%. We evaluate on 36 real garments: towels, T-shirts, long-sleeve shirts, dresses, sweat pants, and jeans. Results suggest that using prior experience, a robot requires under 30 minutes to learn a fling action for a novel garment that achieves 60-94% coverage.
|
2205.13858
|
Timothee Mickus
|
Timothee Mickus and Kees van Deemter and Mathieu Constant and Denis
Paperno
|
Semeval-2022 Task 1: CODWOE -- Comparing Dictionaries and Word
Embeddings
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Word embeddings have advanced the state of the art in NLP across numerous
tasks. Understanding the contents of dense neural representations is of utmost
interest to the computational semantics community. We propose to focus on
relating these opaque word vectors with human-readable definitions, as found in
dictionaries. This problem naturally divides into two subtasks: converting
definitions into embeddings, and converting embeddings into definitions. This
task was conducted in a multilingual setting, using comparable sets of
embeddings trained homogeneously.
|
[
{
"created": "Fri, 27 May 2022 09:40:33 GMT",
"version": "v1"
}
] |
2022-05-30
|
[
[
"Mickus",
"Timothee",
""
],
[
"van Deemter",
"Kees",
""
],
[
"Constant",
"Mathieu",
""
],
[
"Paperno",
"Denis",
""
]
] |
Word embeddings have advanced the state of the art in NLP across numerous tasks. Understanding the contents of dense neural representations is of utmost interest to the computational semantics community. We propose to focus on relating these opaque word vectors with human-readable definitions, as found in dictionaries. This problem naturally divides into two subtasks: converting definitions into embeddings, and converting embeddings into definitions. This task was conducted in a multilingual setting, using comparable sets of embeddings trained homogeneously.
|
1912.03926
|
Gabriel Moreau
|
Gabriel Moreau (LEGI), Bernard Maire-Amiot, David Gras (MOY1100),
Herv\'e Colasuonno (G2ELab), Julien Bamberger (G2ELab), Aur\'elien Minet
(EPHE), Alain P\'ean (C2N), Marie-Goretti Dejean (CIRM)
|
Why I killed my copper -- Highlights about the FTTO in the ESR
|
Video
https://replay.jres.org/videos/watch/6de5f575-9da1-4cb7-82af-f3f90aca9b6e
Congr\`es , in French, JRES : Les Journ\'ees R\'eseaux de l'Enseignement et
de la Recherche, RENATER, Dec 2019, Dijon, France
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
FTTO means Fiber To The Office, in reference to FTTH (Fibre To The Home),
deployed in France for individuals. The principle of FTTO is to cable a
building totally in fibre optic, to remove as much copper cabling as possible
and install microswitches in each office (duct or adjacent), as near the
machines as possible. Users are still connected with standard RJ45 copper
wiring. Through questions and answers, we will highlight the reasons why FTTO
is a controlled and future-oriented technology. Over the last six years, several
building projects within the perimeter of Higher Education and Research have
chosen this technology and have seen or will see the light of day. Depending on
the project, different topologies and technologies are possible. What is the
feedback after these years? Is the result as expected? How is the solution
experienced on a day-to-day basis? What about security, how is a large switch
assembly configured and maintained, and what high availability is possible? How are
Wi-Fi, IP telephony and all PoE devices integrated? Does FTTO contribute to
eco-consumption? How can a FTTO call for tender be set up for a project, what
are the essential elements to be included and what are the errors to be avoided
at all costs? In the future, what is the life expectancy for its infrastructure
and what speeds can be envisaged? The RESINFO FTTO Group is working to provide
clear answers to all these questions and to share its experience with the
community.
|
[
{
"created": "Mon, 9 Dec 2019 09:48:28 GMT",
"version": "v1"
}
] |
2019-12-10
|
[
[
"Moreau",
"Gabriel",
"",
"LEGI"
],
[
"Maire-Amiot",
"Bernard",
"",
"MOY1100"
],
[
"Gras",
"David",
"",
"MOY1100"
],
[
"Colasuonno",
"Hervé",
"",
"G2ELab"
],
[
"Bamberger",
"Julien",
"",
"G2ELab"
],
[
"Minet",
"Aurélien",
"",
"EPHE"
],
[
"Péan",
"Alain",
"",
"C2N"
],
[
"Dejean",
"Marie-Goretti",
"",
"CIRM"
]
] |
FTTO means Fiber To The Office, in reference to FTTH (Fibre To The Home), deployed in France for individuals. The principle of FTTO is to cable a building totally in fibre optic, to remove as much copper cabling as possible and install microswitches in each office (duct or adjacent), as near the machines as possible. Users are still connected with standard RJ45 copper wiring. Through questions and answers, we will highlight the reasons why FTTO is a controlled and future-oriented technology. Over the last six years, several building projects within the perimeter of Higher Education and Research have chosen this technology and have seen or will see the light of day. Depending on the project, different topologies and technologies are possible. What is the feedback after these years? Is the result as expected? How is the solution experienced on a day-to-day basis? What about security, how is a large switch assembly configured and maintained, and what high availability is possible? How are Wi-Fi, IP telephony and all PoE devices integrated? Does FTTO contribute to eco-consumption? How can a FTTO call for tender be set up for a project, what are the essential elements to be included and what are the errors to be avoided at all costs? In the future, what is the life expectancy for its infrastructure and what speeds can be envisaged? The RESINFO FTTO Group is working to provide clear answers to all these questions and to share its experience with the community.
|
2405.17618
|
Ju-Seung Byun
|
Ju-Seung Byun, Andrew Perrault
|
Symmetric Reinforcement Learning Loss for Robust Learning on Diverse
Tasks and Model Scales
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) training is inherently unstable due to factors
such as moving targets and high gradient variance. Reinforcement Learning from
Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF) can
introduce additional difficulty. Differing preferences can complicate the
alignment process, and prediction errors in a trained reward model can become
more severe as the LLM generates unseen outputs. To enhance training
robustness, RL has adopted techniques from supervised learning, such as
ensembles and layer normalization. In this work, we improve the stability of RL
training by adapting the reverse cross entropy (RCE) from supervised learning
for noisy data to define a symmetric RL loss. We demonstrate performance
improvements across various tasks and scales. We conduct experiments in
discrete action tasks (Atari games) and continuous action space tasks (MuJoCo
benchmark and Box2D) using Symmetric A2C (SA2C) and Symmetric PPO (SPPO), with
and without added noise, with especially notable performance in SPPO across
different hyperparameters. Furthermore, we validate the benefits of the
symmetric RL loss when using SPPO for large language models through improved
performance in RLHF tasks, such as IMDB positive sentiment and TL;DR
summarization tasks.
|
[
{
"created": "Mon, 27 May 2024 19:28:33 GMT",
"version": "v1"
},
{
"created": "Wed, 29 May 2024 04:19:00 GMT",
"version": "v2"
}
] |
2024-05-30
|
[
[
"Byun",
"Ju-Seung",
""
],
[
"Perrault",
"Andrew",
""
]
] |
Reinforcement learning (RL) training is inherently unstable due to factors such as moving targets and high gradient variance. Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF) can introduce additional difficulty. Differing preferences can complicate the alignment process, and prediction errors in a trained reward model can become more severe as the LLM generates unseen outputs. To enhance training robustness, RL has adopted techniques from supervised learning, such as ensembles and layer normalization. In this work, we improve the stability of RL training by adapting the reverse cross entropy (RCE) from supervised learning for noisy data to define a symmetric RL loss. We demonstrate performance improvements across various tasks and scales. We conduct experiments in discrete action tasks (Atari games) and continuous action space tasks (MuJoCo benchmark and Box2D) using Symmetric A2C (SA2C) and Symmetric PPO (SPPO), with and without added noise, with especially notable performance in SPPO across different hyperparameters. Furthermore, we validate the benefits of the symmetric RL loss when using SPPO for large language models through improved performance in RLHF tasks, such as IMDB positive sentiment and TL;DR summarization tasks.
|
0709.0428
|
Noelle Carbonell
|
Suzanne Kieffer (INRIA Rocquencourt / INRIA Lorraine - LORIA),
No\"elle Carbonell (INRIA Rocquencourt / INRIA Lorraine - LORIA)
|
Oral messages improve visual search
|
4 pages
|
Dans Proceedings of ACM Working Conference on Advanced Visual
Interfaces - ACM Working Conference on Advanced Visual Interfaces (AVI 2006),
Venezia : Italie (2006)
| null | null |
cs.HC
| null |
Input multimodality combining speech and hand gestures has motivated numerous
usability studies. Contrastingly, issues relating to the design and ergonomic
evaluation of multimodal output messages combining speech with visual
modalities have not yet been addressed extensively. The experimental study
presented here addresses one of these issues. Its aim is to assess the actual
efficiency and usability of oral system messages including brief spatial
information for helping users to locate objects on crowded displays rapidly.
Target presentation mode, scene spatial structure and task difficulty were
chosen as independent variables. Two conditions were defined: the visual target
presentation mode (VP condition) and the multimodal target presentation mode
(MP condition). Each participant carried out two blocks of visual search tasks
(120 tasks per block, and one block per condition). Scene target presentation
mode, scene structure and task difficulty were found to be significant factors.
Multimodal target presentation proved to be more efficient than visual target
presentation. In addition, participants expressed very positive judgments on
multimodal target presentations which were preferred to visual presentations by
a majority of participants. Besides, the contribution of spatial messages to
visual search speed and accuracy was influenced by scene spatial structure and
task difficulty: (i) messages improved search efficiency to a lesser extent for
2D array layouts than for some other symmetrical layouts, although the use of
2D arrays for displaying pictures is currently prevailing; (ii) message
usefulness increased with task difficulty. Most of these results are
statistically significant.
|
[
{
"created": "Tue, 4 Sep 2007 13:27:33 GMT",
"version": "v1"
}
] |
2007-09-05
|
[
[
"Kieffer",
"Suzanne",
"",
"INRIA Rocquencourt / INRIA Lorraine - LORIA"
],
[
"Carbonell",
"Noëlle",
"",
"INRIA Rocquencourt / INRIA Lorraine - LORIA"
]
] |
Input multimodality combining speech and hand gestures has motivated numerous usability studies. Contrastingly, issues relating to the design and ergonomic evaluation of multimodal output messages combining speech with visual modalities have not yet been addressed extensively. The experimental study presented here addresses one of these issues. Its aim is to assess the actual efficiency and usability of oral system messages including brief spatial information for helping users to locate objects on crowded displays rapidly. Target presentation mode, scene spatial structure and task difficulty were chosen as independent variables. Two conditions were defined: the visual target presentation mode (VP condition) and the multimodal target presentation mode (MP condition). Each participant carried out two blocks of visual search tasks (120 tasks per block, and one block per condition). Scene target presentation mode, scene structure and task difficulty were found to be significant factors. Multimodal target presentation proved to be more efficient than visual target presentation. In addition, participants expressed very positive judgments on multimodal target presentations which were preferred to visual presentations by a majority of participants. Besides, the contribution of spatial messages to visual search speed and accuracy was influenced by scene spatial structure and task difficulty: (i) messages improved search efficiency to a lesser extent for 2D array layouts than for some other symmetrical layouts, although the use of 2D arrays for displaying pictures is currently prevailing; (ii) message usefulness increased with task difficulty. Most of these results are statistically significant.
|
1610.01495
|
Francesco Romano
|
Francesco Romano and Daniele Pucci and Silvio Traversaro and Francesco
Nori
|
The Static Center of Pressure Sensitivity: a further Criterion to assess
Contact Stability and Balancing Controllers
| null | null | null | null |
cs.RO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Legged locomotion has received increasing attention from the robotics
community. In this respect, contact stability plays a critical role in ensuring
that robots maintain balance, and it is a key element for balancing and walking
controllers. The Center of Pressure is a contact stability criterion that
defines a point that must be kept strictly inside the support polygon in order
to ensure postural stability. In this paper, we introduce the concept of the
sensitivity of the static center of pressure: roughly speaking, the rate of
change of the center of pressure with respect to the system equilibrium
configurations. This new concept can be used as an additional criterion to
assess the robustness of the contact stability. We show how the sensitivity of
the center of pressure can also be used as a metric to assess balancing
controllers by considering two state-of-the-art control strategies. The
analytical analysis is performed on a simplified model, and validated during
balancing tasks on the iCub humanoid robot.
|
[
{
"created": "Wed, 5 Oct 2016 16:02:24 GMT",
"version": "v1"
},
{
"created": "Mon, 29 May 2017 07:05:55 GMT",
"version": "v2"
}
] |
2017-05-30
|
[
[
"Romano",
"Francesco",
""
],
[
"Pucci",
"Daniele",
""
],
[
"Traversaro",
"Silvio",
""
],
[
"Nori",
"Francesco",
""
]
] |
Legged locomotion has received increasing attention from the robotics community. In this respect, contact stability plays a critical role in ensuring that robots maintain balance, and it is a key element for balancing and walking controllers. The Center of Pressure is a contact stability criterion that defines a point that must be kept strictly inside the support polygon in order to ensure postural stability. In this paper, we introduce the concept of the sensitivity of the static center of pressure: roughly speaking, the rate of change of the center of pressure with respect to the system equilibrium configurations. This new concept can be used as an additional criterion to assess the robustness of the contact stability. We show how the sensitivity of the center of pressure can also be used as a metric to assess balancing controllers by considering two state-of-the-art control strategies. The analytical analysis is performed on a simplified model, and validated during balancing tasks on the iCub humanoid robot.
|
1603.08631
|
Ghassem Tofighi
|
Saman Sarraf and Ghassem Tofighi
|
Classification of Alzheimer's Disease using fMRI Data and Deep Learning
Convolutional Neural Networks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Over the past decade, machine learning techniques, especially predictive
modeling and pattern recognition in the biomedical sciences, from drug
delivery systems to medical imaging, have become important methods assisting
researchers in gaining a deeper understanding of entire issues and solving
complex medical problems. Deep learning is a powerful machine learning
algorithm for classification that extracts high-level features. In this
paper, we used a convolutional neural network to classify Alzheimer's brains
from normal healthy brains. The importance of classifying this kind of
medical data lies in potentially developing a predictive model or system
that recognizes the disease in contrast to normal subjects or estimates the
stage of the disease. Classification of clinical data such as Alzheimer's
disease has always been challenging, and the most problematic part has
always been selecting the most discriminative features. Using a
Convolutional Neural Network (CNN) with the well-known LeNet-5 architecture,
we successfully classified functional MRI data of Alzheimer's subjects
versus normal controls, where the accuracy on test data reached 96.85%. This
experiment suggests that the shift- and scale-invariant features extracted
by the CNN, followed by deep learning classification, form a powerful method
for distinguishing clinical data from healthy data in fMRI. This approach
also enables us to expand our methodology to predict more complicated
systems.
|
[
{
"created": "Tue, 29 Mar 2016 04:30:07 GMT",
"version": "v1"
}
] |
2016-03-30
|
[
[
"Sarraf",
"Saman",
""
],
[
"Tofighi",
"Ghassem",
""
]
] |
Over the past decade, machine learning techniques, especially predictive modeling and pattern recognition in the biomedical sciences, from drug delivery systems to medical imaging, have become important methods assisting researchers in gaining a deeper understanding of entire issues and solving complex medical problems. Deep learning is a powerful machine learning algorithm for classification that extracts high-level features. In this paper, we used a convolutional neural network to classify Alzheimer's brains from normal healthy brains. The importance of classifying this kind of medical data lies in potentially developing a predictive model or system that recognizes the disease in contrast to normal subjects or estimates the stage of the disease. Classification of clinical data such as Alzheimer's disease has always been challenging, and the most problematic part has always been selecting the most discriminative features. Using a Convolutional Neural Network (CNN) with the well-known LeNet-5 architecture, we successfully classified functional MRI data of Alzheimer's subjects versus normal controls, where the accuracy on test data reached 96.85%. This experiment suggests that the shift- and scale-invariant features extracted by the CNN, followed by deep learning classification, form a powerful method for distinguishing clinical data from healthy data in fMRI. This approach also enables us to expand our methodology to predict more complicated systems.
|
1808.04495
|
Jialei Chen
|
Jialei Chen, Yujia Xie, Kan Wang, Zih Huei Wang, Geet Lahoti, Chuck
Zhang, Mani A Vannan, Ben Wang, Zhen Qian
|
Generative Invertible Networks (GIN): Pathophysiology-Interpretable
Feature Mapping and Virtual Patient Generation
| null | null |
10.1007/978-3-030-00928-1_61
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning methods play increasingly important roles in pre-procedural
planning for complex surgeries and interventions. Very often, however,
researchers find the historical records of emerging surgical techniques, such
as the transcatheter aortic valve replacement (TAVR), are highly scarce in
quantity. In this paper, we address this challenge by proposing novel
generative invertible networks (GIN) to select features and generate
high-quality virtual patients that may potentially serve as an additional data
source for machine learning. Combining a convolutional neural network (CNN) and
generative adversarial networks (GAN), GIN discovers the pathophysiologic
meaning of the feature space. Moreover, a test of predicting the surgical
outcome directly using the selected features results in a high accuracy of
81.55%, which suggests little pathophysiologic information has been lost while
conducting the feature selection. This demonstrates GIN can generate virtual
patients not only visually authentic but also pathophysiologically
interpretable.
|
[
{
"created": "Tue, 14 Aug 2018 00:18:33 GMT",
"version": "v1"
}
] |
2019-02-06
|
[
[
"Chen",
"Jialei",
""
],
[
"Xie",
"Yujia",
""
],
[
"Wang",
"Kan",
""
],
[
"Wang",
"Zih Huei",
""
],
[
"Lahoti",
"Geet",
""
],
[
"Zhang",
"Chuck",
""
],
[
"Vannan",
"Mani A",
""
],
[
"Wang",
"Ben",
""
],
[
"Qian",
"Zhen",
""
]
] |
Machine learning methods play increasingly important roles in pre-procedural planning for complex surgeries and interventions. Very often, however, researchers find the historical records of emerging surgical techniques, such as the transcatheter aortic valve replacement (TAVR), are highly scarce in quantity. In this paper, we address this challenge by proposing novel generative invertible networks (GIN) to select features and generate high-quality virtual patients that may potentially serve as an additional data source for machine learning. Combining a convolutional neural network (CNN) and generative adversarial networks (GAN), GIN discovers the pathophysiologic meaning of the feature space. Moreover, a test of predicting the surgical outcome directly using the selected features results in a high accuracy of 81.55%, which suggests little pathophysiologic information has been lost while conducting the feature selection. This demonstrates GIN can generate virtual patients not only visually authentic but also pathophysiologically interpretable.
|
1810.03711
|
Luigi Freda
|
Luigi Freda and Mario Gianni and Fiora Pirri
|
A Hybrid Approach for Trajectory Control Design
|
9 pages, 11 figures
| null | null | null |
cs.RO cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents a methodology to design trajectory tracking feedback
control laws, which embed non-parametric statistical models, such as Gaussian
Processes (GPs). The aim is to minimize unmodeled dynamics such as undesired
slippages. The proposed approach has the benefit of avoiding complex
terramechanics analysis to directly estimate from data the robot dynamics on a
wide class of trajectories. Experiments in both real and simulated environments
prove that the proposed methodology is promising.
|
[
{
"created": "Mon, 8 Oct 2018 21:40:07 GMT",
"version": "v1"
},
{
"created": "Sat, 5 Jan 2019 10:15:13 GMT",
"version": "v2"
},
{
"created": "Sat, 19 Nov 2022 18:44:19 GMT",
"version": "v3"
}
] |
2022-11-22
|
[
[
"Freda",
"Luigi",
""
],
[
"Gianni",
"Mario",
""
],
[
"Pirri",
"Fiora",
""
]
] |
This work presents a methodology to design trajectory tracking feedback control laws, which embed non-parametric statistical models, such as Gaussian Processes (GPs). The aim is to minimize unmodeled dynamics such as undesired slippages. The proposed approach has the benefit of avoiding complex terramechanics analysis to directly estimate from data the robot dynamics on a wide class of trajectories. Experiments in both real and simulated environments prove that the proposed methodology is promising.
|
2006.13114
|
Preethi Lahoti
|
Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost,
Nithum Thain, Xuezhi Wang, Ed H. Chi
|
Fairness without Demographics through Adversarially Reweighted Learning
|
To appear at 34th Conference on Neural Information Processing Systems
(NeurIPS 2020), Vancouver, Canada
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Much of the previous machine learning (ML) fairness literature assumes that
protected features such as race and sex are present in the dataset, and relies
upon them to mitigate fairness concerns. However, in practice factors like
privacy and regulation often preclude the collection of protected features, or
their use for training or inference, severely limiting the applicability of
traditional fairness research. Therefore we ask: How can we train an ML model
to improve fairness when we do not even know the protected group memberships?
In this work we address this problem by proposing Adversarially Reweighted
Learning (ARL). In particular, we hypothesize that non-protected features and
task labels are valuable for identifying fairness issues, and can be used to
co-train an adversarial reweighting approach for improving fairness. Our
results show that ARL improves Rawlsian Max-Min fairness, with notable AUC
improvements for worst-case protected groups in multiple datasets,
outperforming state-of-the-art alternatives.
|
[
{
"created": "Tue, 23 Jun 2020 16:06:52 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jun 2020 12:53:26 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Nov 2020 18:02:12 GMT",
"version": "v3"
}
] |
2020-11-04
|
[
[
"Lahoti",
"Preethi",
""
],
[
"Beutel",
"Alex",
""
],
[
"Chen",
"Jilin",
""
],
[
"Lee",
"Kang",
""
],
[
"Prost",
"Flavien",
""
],
[
"Thain",
"Nithum",
""
],
[
"Wang",
"Xuezhi",
""
],
[
"Chi",
"Ed H.",
""
]
] |
Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns. However, in practice factors like privacy and regulation often preclude the collection of protected features, or their use for training or inference, severely limiting the applicability of traditional fairness research. Therefore we ask: How can we train an ML model to improve fairness when we do not even know the protected group memberships? In this work we address this problem by proposing Adversarially Reweighted Learning (ARL). In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues, and can be used to co-train an adversarial reweighting approach for improving fairness. Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets, outperforming state-of-the-art alternatives.
|
1506.03551
|
Abolfazl Diyanat
|
Ahmad Khonsari, Seyed Pooya Shariatpanahi, Abolfazl Diyanat, Hossein
Shafiei
|
On the Feasibility of Wireless Interconnects for High-throughput Data
Centers
| null | null | null | null |
cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data Centers (DCs) are required to be scalable to large data sets so as to
accommodate ever increasing demands of resource-limited embedded and mobile
devices. Thanks to the availability of recent high data rate millimeter-wave
frequency spectrum such as 60GHz and due to the favorable attributes of this
technology, wireless DC (WDC) exhibits the potential of being a promising
solution especially for small to medium scale DCs. This paper investigates the
problem of throughput scalability of WDCs using the established theory of the
asymptotic throughput of wireless multi-hop networks that are primarily
proposed for homogeneous traffic conditions. The rate-heterogeneous traffic
distribution of a data center, however, requires the asymptotic heterogeneous
throughput knowledge of a wireless network in order to study the performance
and feasibility of WDCs for practical purposes. To answer these questions, this
paper presents a lower bound for the throughput scalability of a multi-hop
rate-heterogeneous network when traffic generation rates of all nodes are
similar, except one node. We demonstrate that the throughput scalability of
conventional multi-hopping and the spatial reuse of the above bi-rate network
is inefficient, and hence develop a speculative 2-partitioning scheme that
improves the network throughput scaling potential. A better lower bound of the
throughput is then obtained. Finally, we obtain the throughput scaling of an
i.i.d. rate-heterogeneous network and obtain its lower bound. Again we propose
a speculative 2-partitioning scheme to achieve a network with higher throughput
in terms of improved lower bound. All of the obtained results have been
verified using simulation experiments.
|
[
{
"created": "Thu, 11 Jun 2015 06:02:06 GMT",
"version": "v1"
}
] |
2015-06-12
|
[
[
"Khonsari",
"Ahmad",
""
],
[
"Shariatpanahi",
"Seyed Pooya",
""
],
[
"Diyanat",
"Abolfazl",
""
],
[
"Shafiei",
"Hossein",
""
]
] |
Data Centers (DCs) are required to be scalable to large data sets so as to accommodate ever increasing demands of resource-limited embedded and mobile devices. Thanks to the availability of recent high data rate millimeter-wave frequency spectrum such as 60GHz and due to the favorable attributes of this technology, wireless DC (WDC) exhibits the potential of being a promising solution especially for small to medium scale DCs. This paper investigates the problem of throughput scalability of WDCs using the established theory of the asymptotic throughput of wireless multi-hop networks that are primarily proposed for homogeneous traffic conditions. The rate-heterogeneous traffic distribution of a data center, however, requires the asymptotic heterogeneous throughput knowledge of a wireless network in order to study the performance and feasibility of WDCs for practical purposes. To answer these questions, this paper presents a lower bound for the throughput scalability of a multi-hop rate-heterogeneous network when traffic generation rates of all nodes are similar, except one node. We demonstrate that the throughput scalability of conventional multi-hopping and the spatial reuse of the above bi-rate network is inefficient and henceforth develop a speculative 2-partitioning scheme that improves the network throughput scaling potential. A better lower bound of the throughput is then obtained. Finally, we obtain the throughput scaling of an i.i.d. rate-heterogeneous network and obtain its lower bound. Again we propose a speculative 2-partitioning scheme to achieve a network with higher throughput in terms of improved lower bound. All of the obtained results have been verified using simulation experiments.
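The abstract above leans on the established asymptotic throughput theory for homogeneous multi-hop wireless networks. A minimal sketch of that baseline scaling (Gupta-Kumar style, constant factors ignored; the function names are mine, not the paper's):

```python
import math

def per_node_throughput(n, W=1.0):
    """Per-node throughput of a random multi-hop wireless network with n
    nodes and channel capacity W under homogeneous traffic:
    Theta(W / sqrt(n log n)), constant factors ignored."""
    return W / math.sqrt(n * math.log(n))

def aggregate_throughput(n, W=1.0):
    """Total network throughput under the same scaling:
    Theta(W * sqrt(n / log n))."""
    return n * per_node_throughput(n, W)
```

Per-node throughput shrinks as the network grows while the aggregate still increases; the paper's heterogeneous bounds modify this baseline for the bi-rate and i.i.d. rate cases.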
|
2002.11023
|
Carlos Bobed
|
Mar\'ia G. Buey and Carlos Bobed and Jorge Gracia and Eduardo Mena
|
Semantic Relatedness for Keyword Disambiguation: Exploiting Different
Embeddings
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the meaning of words is crucial for many tasks that involve
human-machine interaction. This has been tackled by research in Word Sense
Disambiguation (WSD) in the Natural Language Processing (NLP) field. Recently,
WSD and many other NLP tasks have taken advantage of embeddings-based
representation of words, sentences, and documents. However, when it comes to
WSD, most embedding models suffer from ambiguity, as they do not capture the
different possible meanings of the words. Even when they do, the list of
possible meanings for a word (sense inventory) has to be known in advance at
training time to be included in the embeddings space. Unfortunately, there are
situations in which such a sense inventory is not known in advance (e.g., an
ontology selected at run-time), or it evolves with time and its status diverges
from the one at training time. This hampers the use of embedding models for
WSD. Furthermore, traditional WSD techniques do not perform well in situations
in which the available linguistic information is very scarce, such as the case
of keyword-based queries. In this paper, we propose an approach to keyword
disambiguation which is grounded on a semantic relatedness between words and senses
provided by an external inventory (ontology) that is not known at training
time. Building on previous works, we present a semantic relatedness measure
that uses word embeddings, and explore different disambiguation algorithms to
also exploit both word and sentence representations. Experimental results show
that this approach achieves results comparable with the state of the art when
applied for WSD, without training for a particular domain.
|
[
{
"created": "Tue, 25 Feb 2020 16:44:50 GMT",
"version": "v1"
}
] |
2020-02-26
|
[
[
"Buey",
"María G.",
""
],
[
"Bobed",
"Carlos",
""
],
[
"Gracia",
"Jorge",
""
],
[
"Mena",
"Eduardo",
""
]
] |
Understanding the meaning of words is crucial for many tasks that involve human-machine interaction. This has been tackled by research in Word Sense Disambiguation (WSD) in the Natural Language Processing (NLP) field. Recently, WSD and many other NLP tasks have taken advantage of embeddings-based representation of words, sentences, and documents. However, when it comes to WSD, most embedding models suffer from ambiguity, as they do not capture the different possible meanings of the words. Even when they do, the list of possible meanings for a word (sense inventory) has to be known in advance at training time to be included in the embeddings space. Unfortunately, there are situations in which such a sense inventory is not known in advance (e.g., an ontology selected at run-time), or it evolves with time and its status diverges from the one at training time. This hampers the use of embedding models for WSD. Furthermore, traditional WSD techniques do not perform well in situations in which the available linguistic information is very scarce, such as the case of keyword-based queries. In this paper, we propose an approach to keyword disambiguation which is grounded on a semantic relatedness between words and senses provided by an external inventory (ontology) that is not known at training time. Building on previous works, we present a semantic relatedness measure that uses word embeddings, and explore different disambiguation algorithms to also exploit both word and sentence representations. Experimental results show that this approach achieves results comparable with the state of the art when applied for WSD, without training for a particular domain.
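A minimal sketch of embedding-based relatedness between a word and candidate senses, in the spirit of the approach above. The sense representation (centroid of the embeddings of the words describing a sense) and all names are illustrative assumptions, not the paper's actual measure:

```python
import numpy as np

def relatedness(word_vec, sense_vecs):
    """Cosine similarity between a word embedding and the centroid of the
    embeddings of the words describing a candidate sense (assumed sense
    representation, for illustration only)."""
    sense = np.mean(sense_vecs, axis=0)
    num = float(np.dot(word_vec, sense))
    den = float(np.linalg.norm(word_vec) * np.linalg.norm(sense))
    return num / den if den else 0.0

def disambiguate(word_vec, inventory):
    """Pick the sense with the highest relatedness; `inventory` maps sense
    ids (from an external ontology unknown at training time) to lists of
    embedding vectors."""
    return max(inventory, key=lambda s: relatedness(word_vec, inventory[s]))
```

Because the inventory is only consulted at query time, new or evolving sense inventories need no retraining, which is the key property the paper exploits.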
|
2310.13544
|
Vil\'em Zouhar
|
Shehzaad Dhuliawala, Vil\'em Zouhar, Mennatallah El-Assady, Mrinmaya
Sachan
|
A Diachronic Perspective on User Trust in AI under Uncertainty
|
EMNLP 2023, 14 pages (8+6)
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In a human-AI collaboration, users build a mental model of the AI system
based on its reliability and how it presents its decision, e.g. its
presentation of system confidence and an explanation of the output. Modern NLP
systems are often uncalibrated, resulting in confidently incorrect predictions
that undermine user trust. In order to build trustworthy AI, we must understand
how user trust is developed and how it can be regained after potential
trust-eroding events. We study the evolution of user trust in response to these
trust-eroding events using a betting game. We find that even a few incorrect
instances with inaccurate confidence estimates damage user trust and
performance, with very slow recovery. We also show that this degradation in
trust reduces the success of human-AI collaboration and that different types of
miscalibration -- unconfidently correct and confidently incorrect -- have
different negative effects on user trust. Our findings highlight the importance
of calibration in user-facing AI applications and shed light on what aspects
help users decide whether to trust the AI system.
|
[
{
"created": "Fri, 20 Oct 2023 14:41:46 GMT",
"version": "v1"
}
] |
2023-10-23
|
[
[
"Dhuliawala",
"Shehzaad",
""
],
[
"Zouhar",
"Vilém",
""
],
[
"El-Assady",
"Mennatallah",
""
],
[
"Sachan",
"Mrinmaya",
""
]
] |
In a human-AI collaboration, users build a mental model of the AI system based on its reliability and how it presents its decision, e.g. its presentation of system confidence and an explanation of the output. Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust. In order to build trustworthy AI, we must understand how user trust is developed and how it can be regained after potential trust-eroding events. We study the evolution of user trust in response to these trust-eroding events using a betting game. We find that even a few incorrect instances with inaccurate confidence estimates damage user trust and performance, with very slow recovery. We also show that this degradation in trust reduces the success of human-AI collaboration and that different types of miscalibration -- unconfidently correct and confidently incorrect -- have different negative effects on user trust. Our findings highlight the importance of calibration in user-facing AI applications and shed light on what aspects help users decide whether to trust the AI system.
|
2202.00315
|
Johannes Wolf K\"unzel
|
Clemens Seibold, Johannes K\"unzel, Anna Hilsmann, Peter Eisert
|
From Explanations to Segmentation: Using Explainable AI for Image
Segmentation
|
to be published in: 17th International Conference on Computer Vision
Theory and Applications (VISAPP), February 2022
| null |
10.5220/0010893600003124
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The new era of image segmentation leveraging the power of Deep Neural Nets
(DNNs) comes with a price tag: to train a neural network for pixel-wise
segmentation, a large number of training samples has to be manually labeled at
pixel precision. In this work, we address this by following an indirect
solution. We build upon the advances of the Explainable AI (XAI) community and
extract a pixel-wise binary segmentation from the output of the Layer-wise
Relevance Propagation (LRP) explaining the decision of a classification
network. We show that we achieve similar results compared to an established
U-Net segmentation architecture, while the generation of the training data is
significantly simplified. The proposed method can be trained in a weakly
supervised fashion, as the training samples need only be labeled at
image level, while still enabling the output of a segmentation mask. This
makes it especially applicable to a wider range of real applications where
tedious pixel-level labelling is often not possible.
|
[
{
"created": "Tue, 1 Feb 2022 10:26:10 GMT",
"version": "v1"
}
] |
2023-03-01
|
[
[
"Seibold",
"Clemens",
""
],
[
"Künzel",
"Johannes",
""
],
[
"Hilsmann",
"Anna",
""
],
[
"Eisert",
"Peter",
""
]
] |
The new era of image segmentation leveraging the power of Deep Neural Nets (DNNs) comes with a price tag: to train a neural network for pixel-wise segmentation, a large number of training samples has to be manually labeled at pixel precision. In this work, we address this by following an indirect solution. We build upon the advances of the Explainable AI (XAI) community and extract a pixel-wise binary segmentation from the output of the Layer-wise Relevance Propagation (LRP) explaining the decision of a classification network. We show that we achieve similar results compared to an established U-Net segmentation architecture, while the generation of the training data is significantly simplified. The proposed method can be trained in a weakly supervised fashion, as the training samples need only be labeled at image level, while still enabling the output of a segmentation mask. This makes it especially applicable to a wider range of real applications where tedious pixel-level labelling is often not possible.
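One simple way to turn a pixel-wise relevance map into the binary segmentation the abstract describes is quantile thresholding. The quantile threshold is an assumption for illustration; the paper's actual post-processing of the LRP output may differ:

```python
import numpy as np

def relevance_to_mask(relevance, q=0.8):
    """Turn a pixel-wise LRP relevance map into a binary segmentation mask
    by keeping pixels whose relevance is at or above the q-quantile.
    The quantile is a hypothetical choice, not the paper's parameter."""
    thresh = np.quantile(relevance, q)
    return (relevance >= thresh).astype(np.uint8)
```

The classifier providing the relevance map needs only image-level labels, which is what makes the overall pipeline weakly supervised.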
|
2204.01027
|
Yuya Hasegawa
|
Yuya Hasegawa, Ikehata Satoshi, Kiyoharu Aizawa
|
Distortion-Aware Self-Supervised 360{\deg} Depth Estimation from A
Single Equirectangular Projection Image
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
360{\deg} images have become widely available over the last few years. This paper
proposes a new technique for single 360{\deg} image depth prediction under open
environments. Depth prediction from a 360{\deg} single image is not easy for
two reasons. One is the limitation of supervision datasets - the currently
available dataset is limited to indoor scenes. The other is the problems caused
by Equirectangular Projection Format (ERP), commonly used for 360{\deg} images,
namely its coordinate system and distortion. Only one existing method uses
cube map projection to produce six perspective images and applies self-supervised
learning using motion pictures for perspective depth prediction to deal with
these problems. Different from the existing method, we directly use the ERP
format. We propose a framework of direct use of ERP with coordinate conversion
of correspondences and distortion-aware upsampling module to deal with the ERP
related problems and extend a self-supervised learning method for open
environments. For the experiments, we first built a dataset for the
evaluation, and quantitatively evaluate the depth prediction in outdoor scenes.
We show that it outperforms the state-of-the-art technique.
|
[
{
"created": "Sun, 3 Apr 2022 08:28:44 GMT",
"version": "v1"
}
] |
2022-04-05
|
[
[
"Hasegawa",
"Yuya",
""
],
[
"Satoshi",
"Ikehata",
""
],
[
"Aizawa",
"Kiyoharu",
""
]
] |
360{\deg} images have become widely available over the last few years. This paper proposes a new technique for single 360{\deg} image depth prediction under open environments. Depth prediction from a 360{\deg} single image is not easy for two reasons. One is the limitation of supervision datasets - the currently available dataset is limited to indoor scenes. The other is the problems caused by Equirectangular Projection Format (ERP), commonly used for 360{\deg} images, namely its coordinate system and distortion. Only one existing method uses cube map projection to produce six perspective images and applies self-supervised learning using motion pictures for perspective depth prediction to deal with these problems. Different from the existing method, we directly use the ERP format. We propose a framework of direct use of ERP with coordinate conversion of correspondences and distortion-aware upsampling module to deal with the ERP related problems and extend a self-supervised learning method for open environments. For the experiments, we first built a dataset for the evaluation, and quantitatively evaluate the depth prediction in outdoor scenes. We show that it outperforms the state-of-the-art technique.
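The ERP coordinate and distortion issues mentioned above can be made concrete with the standard equirectangular mapping. A sketch under assumed conventions (longitude in [-pi, pi), latitude decreasing with the image row); the paper's exact conversion may differ:

```python
import numpy as np

def erp_to_sphere(u, v, width, height):
    """Map ERP pixel coordinates (u, v) to spherical coordinates in radians:
    longitude in [-pi, pi), latitude in (-pi/2, pi/2]."""
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    return lon, lat

def erp_distortion_weight(v, height):
    """Horizontal stretch factor of ERP at image row v: pixels near the
    poles cover less longitudinal arc, by a factor cos(latitude). This is
    the distortion a distortion-aware module must account for."""
    _, lat = erp_to_sphere(0.0, v, 1.0, height)
    return float(np.cos(lat))
```

The weight approaching zero at the poles is exactly why naive convolution or upsampling on ERP images over-weights polar regions.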
|
2406.15303
|
Yunlong Zhang
|
Yunlong Zhang and Zhongyi Shui and Yunxuan Sun and Honglin Li and
Jingxiong Li and Chenglu Zhu and Sunyi Zheng and Lin Yang
|
ADR: Attention Diversification Regularization for Mitigating Overfitting
in Multiple Instance Learning based Whole Slide Image Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple Instance Learning (MIL) has demonstrated effectiveness in analyzing
whole slide images (WSIs), yet it often encounters overfitting challenges in
real-world applications. This paper reveals the correlation between MIL's
performance and the entropy of attention values. Based on this observation, we
propose Attention Diversity Regularization (ADR), a simple but effective
technique aimed at promoting high entropy in attention values. Specifically,
ADR introduces a negative Shannon entropy loss for attention values into the
regular MIL framework. Compared to existing methods aimed at alleviating
overfitting, which often necessitate additional modules or processing steps,
our ADR approach requires no such extras, demonstrating simplicity and
efficiency. We evaluate our ADR on three WSI classification tasks. ADR achieves
superior performance over the state-of-the-art on most of them. We also show
that ADR can enhance heatmaps, aligning them better with pathologists'
diagnostic criteria. The source code is available at
\url{https://github.com/dazhangyu123/ADR}.
|
[
{
"created": "Tue, 18 Jun 2024 02:01:17 GMT",
"version": "v1"
}
] |
2024-06-24
|
[
[
"Zhang",
"Yunlong",
""
],
[
"Shui",
"Zhongyi",
""
],
[
"Sun",
"Yunxuan",
""
],
[
"Li",
"Honglin",
""
],
[
"Li",
"Jingxiong",
""
],
[
"Zhu",
"Chenglu",
""
],
[
"Zheng",
"Sunyi",
""
],
[
"Yang",
"Lin",
""
]
] |
Multiple Instance Learning (MIL) has demonstrated effectiveness in analyzing whole slide images (WSIs), yet it often encounters overfitting challenges in real-world applications. This paper reveals the correlation between MIL's performance and the entropy of attention values. Based on this observation, we propose Attention Diversity Regularization (ADR), a simple but effective technique aimed at promoting high entropy in attention values. Specifically, ADR introduces a negative Shannon entropy loss for attention values into the regular MIL framework. Compared to existing methods aimed at alleviating overfitting, which often necessitate additional modules or processing steps, our ADR approach requires no such extras, demonstrating simplicity and efficiency. We evaluate our ADR on three WSI classification tasks. ADR achieves superior performance over the state-of-the-art on most of them. We also show that ADR can enhance heatmaps, aligning them better with pathologists' diagnostic criteria. The source code is available at \url{https://github.com/dazhangyu123/ADR}.
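The ADR idea of adding a negative Shannon entropy loss over attention values can be sketched as follows. This is an illustrative NumPy version with an assumed weighting factor `lam`, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over instance attention logits."""
    z = np.exp(x - x.max())
    return z / z.sum()

def adr_loss(attention_logits, lam=0.1):
    """Negative Shannon entropy of the attention distribution over instances
    in a bag. Added to the regular MIL loss, minimizing it pushes attention
    toward high entropy (more diverse instances attended). `lam` is an
    assumed weighting hyperparameter."""
    a = softmax(attention_logits)
    entropy = -np.sum(a * np.log(a + 1e-12))
    return -lam * entropy  # minimizing this term maximizes entropy
```

A uniform attention distribution attains the minimum of this term (entropy log N), while a peaked, overfit-prone distribution is penalized with a loss closer to zero.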
|
2301.06724
|
Emily Dolson
|
Emily Dolson
|
Calculating lexicase selection probabilities is NP-Hard
| null | null |
10.1145/3583131.3590356
| null |
cs.NE cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
Calculating the probability of an individual solution being selected under
lexicase selection is an important problem in attempts to develop a deeper
theoretical understanding of lexicase selection, a state-of-the-art parent
selection algorithm in evolutionary computation. Discovering a fast solution to
this problem would also have implications for efforts to develop practical
improvements to lexicase selection. Here, I prove that this problem, which I
name lex-prob, is NP-Hard. I achieve this proof by reducing SAT, a well-known
NP-Complete problem, to lex-prob in polynomial time. This reduction involves an
intermediate step in which a popular variant of lexicase selection,
epsilon-lexicase selection, is reduced to standard lexicase selection. This
proof has important practical implications for anyone needing a fast way of
calculating the probabilities of individual solutions being selected under
lexicase selection. Doing so in polynomial time would be incredibly
challenging, if not altogether impossible. Thus, finding approximation
algorithms or practical optimizations for speeding up the brute-force solution
is likely more worthwhile. This result also has deeper theoretical implications
about the relationship between epsilon-lexicase selection and lexicase
selection and the relationship between lex-prob and other NP-Hard problems.
|
[
{
"created": "Tue, 17 Jan 2023 06:51:44 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Apr 2023 22:16:41 GMT",
"version": "v2"
}
] |
2023-04-25
|
[
[
"Dolson",
"Emily",
""
]
] |
Calculating the probability of an individual solution being selected under lexicase selection is an important problem in attempts to develop a deeper theoretical understanding of lexicase selection, a state-of-the-art parent selection algorithm in evolutionary computation. Discovering a fast solution to this problem would also have implications for efforts to develop practical improvements to lexicase selection. Here, I prove that this problem, which I name lex-prob, is NP-Hard. I achieve this proof by reducing SAT, a well-known NP-Complete problem, to lex-prob in polynomial time. This reduction involves an intermediate step in which a popular variant of lexicase selection, epsilon-lexicase selection, is reduced to standard lexicase selection. This proof has important practical implications for anyone needing a fast way of calculating the probabilities of individual solutions being selected under lexicase selection. Doing so in polynomial time would be incredibly challenging, if not altogether impossible. Thus, finding approximation algorithms or practical optimizations for speeding up the brute-force solution is likely more worthwhile. This result also has deeper theoretical implications about the relationship between epsilon-lexicase selection and lexicase selection and the relationship between lex-prob and other NP-Hard problems.
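The problem named lex-prob above can always be solved by brute force over test-case orderings, which is exactly the exponential-time approach the NP-hardness result suggests cannot be avoided in general. A sketch (tie-breaking after all tests is assumed uniform):

```python
import itertools
from fractions import Fraction

def lexicase_probs(errors):
    """Brute-force selection probability of each individual under lexicase
    selection. `errors[i][t]` is individual i's error on test case t.
    Enumerates all orderings of the test cases, so the cost is factorial
    in the number of tests - the blow-up the NP-hardness result concerns."""
    n_ind = len(errors)
    n_tests = len(errors[0])
    perms = list(itertools.permutations(range(n_tests)))
    probs = [Fraction(0)] * n_ind
    for order in perms:
        pool = list(range(n_ind))
        for t in order:
            best = min(errors[i][t] for i in pool)
            pool = [i for i in pool if errors[i][t] == best]
            if len(pool) == 1:
                break
        # individuals still tied after all tests are chosen uniformly
        share = Fraction(1, len(perms) * len(pool))
        for i in pool:
            probs[i] += share
    return probs
```

Exact rationals make the result a true probability distribution; on two complementary specialists and one generalist that is never strictly best, the specialists split the probability evenly.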
|
2211.02250
|
Bandhav Veluri
|
Bandhav Veluri, Justin Chan, Malek Itani, Tuochao Chen, Takuya
Yoshioka, Shyamnath Gollakota
|
Real-Time Target Sound Extraction
|
ICASSP 2023 camera-ready
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present the first neural network model to achieve real-time and streaming
target sound extraction. To accomplish this, we propose Waveformer, an
encoder-decoder architecture with a stack of dilated causal convolution layers
as the encoder, and a transformer decoder layer as the decoder. This hybrid
architecture uses dilated causal convolutions for processing large receptive
fields in a computationally efficient manner while also leveraging the
generalization performance of transformer-based architectures. Our evaluations
show as much as 2.2-3.3 dB improvement in SI-SNRi compared to the prior models
for this task while having a 1.2-4x smaller model size and a 1.5-2x lower
runtime. We provide code, dataset, and audio samples:
https://waveformer.cs.washington.edu/.
|
[
{
"created": "Fri, 4 Nov 2022 03:51:23 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Nov 2022 23:56:23 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Apr 2023 09:43:32 GMT",
"version": "v3"
}
] |
2023-04-20
|
[
[
"Veluri",
"Bandhav",
""
],
[
"Chan",
"Justin",
""
],
[
"Itani",
"Malek",
""
],
[
"Chen",
"Tuochao",
""
],
[
"Yoshioka",
"Takuya",
""
],
[
"Gollakota",
"Shyamnath",
""
]
] |
We present the first neural network model to achieve real-time and streaming target sound extraction. To accomplish this, we propose Waveformer, an encoder-decoder architecture with a stack of dilated causal convolution layers as the encoder, and a transformer decoder layer as the decoder. This hybrid architecture uses dilated causal convolutions for processing large receptive fields in a computationally efficient manner while also leveraging the generalization performance of transformer-based architectures. Our evaluations show as much as 2.2-3.3 dB improvement in SI-SNRi compared to the prior models for this task while having a 1.2-4x smaller model size and a 1.5-2x lower runtime. We provide code, dataset, and audio samples: https://waveformer.cs.washington.edu/.
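The dilated causal convolutions in the encoder can be illustrated with a toy 1-D version: each layer with dilation d adds (kernel_size - 1) * d samples of left-only context, so a short stack covers a large receptive field. Function names are mine, not the Waveformer code's:

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """1-D causal convolution: the output at time t depends only on inputs
    at times <= t, implemented by left-padding with zeros (no lookahead,
    as required for streaming)."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(kernel[i] * xp[t + pad - i * dilation] for i in range(k))
                     for t in range(len(x))])

def receptive_field(kernel_size=3, dilations=(1, 2, 4, 8, 16, 32)):
    """Receptive field (in samples) of a stack of dilated causal conv layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)
```

With kernel size 3 and dilations doubling over ten layers the receptive field reaches thousands of samples while each layer stays cheap, which is the efficiency argument made in the abstract.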
|
1412.6853
|
Renato Fabbri
|
Renato Fabbri, Vilson Vieira da Silva Junior, Ant\^onio Carlos Silvano
Pessotti, D\'ebora Cristina Corr\^ea, Osvaldo N. Oliveira Jr
|
Musical elements in the discrete-time representation of sound
|
A software toolbox, a Python Package, musical pieces and further
documents are in: https://github.com/ttm/mass
| null | null | null |
cs.SD physics.pop-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The representation of basic elements of music in terms of discrete audio
signals is often used in software for musical creation and design.
Nevertheless, there is no unified approach that relates these elements to the
discrete samples of digitized sound. In this article, each musical element is
related by equations and algorithms to the discrete-time samples of sounds, and
each of these relations is implemented in scripts within a software toolbox,
referred to as MASS (Music and Audio in Sample Sequences). The fundamental
element, the musical note with duration, volume, pitch and timbre, is related
quantitatively to characteristics of the digital signal. Internal variations of
a note, such as tremolos, vibratos and spectral fluctuations, are also
considered, which enables the synthesis of notes inspired by real instruments
and new sonorities. With this representation of notes, resources are provided
for the generation of higher scale musical structures, such as rhythmic meter,
pitch intervals and cycles. This framework enables precise and trustworthy
scientific experiments and data sonification, and is useful for education and art.
The efficacy of MASS is confirmed by the synthesis of small musical pieces
using basic notes, elaborated notes and notes in music, which reflects the
organization of the toolbox and thus of this article. It is possible to
synthesize whole albums through collage of the scripts and settings specified
by the user. With the open source paradigm, the toolbox can be promptly
scrutinized, expanded in co-authorship processes and used with freedom by
musicians, engineers and other interested parties. In fact, MASS has already
been employed for diverse purposes which include music production, artistic
presentations, psychoacoustic experiments and computer language diffusion where
the appeal of audiovisual artifacts is exploited for education.
|
[
{
"created": "Mon, 22 Dec 2014 01:04:53 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Oct 2017 23:07:52 GMT",
"version": "v2"
}
] |
2017-10-30
|
[
[
"Fabbri",
"Renato",
""
],
[
"Junior",
"Vilson Vieira da Silva",
""
],
[
"Pessotti",
"Antônio Carlos Silvano",
""
],
[
"Corrêa",
"Débora Cristina",
""
],
[
"Oliveira",
"Osvaldo N.",
"Jr"
]
] |
The representation of basic elements of music in terms of discrete audio signals is often used in software for musical creation and design. Nevertheless, there is no unified approach that relates these elements to the discrete samples of digitized sound. In this article, each musical element is related by equations and algorithms to the discrete-time samples of sounds, and each of these relations is implemented in scripts within a software toolbox, referred to as MASS (Music and Audio in Sample Sequences). The fundamental element, the musical note with duration, volume, pitch and timbre, is related quantitatively to characteristics of the digital signal. Internal variations of a note, such as tremolos, vibratos and spectral fluctuations, are also considered, which enables the synthesis of notes inspired by real instruments and new sonorities. With this representation of notes, resources are provided for the generation of higher scale musical structures, such as rhythmic meter, pitch intervals and cycles. This framework enables precise and trustworthy scientific experiments and data sonification, and is useful for education and art. The efficacy of MASS is confirmed by the synthesis of small musical pieces using basic notes, elaborated notes and notes in music, which reflects the organization of the toolbox and thus of this article. It is possible to synthesize whole albums through collage of the scripts and settings specified by the user. With the open source paradigm, the toolbox can be promptly scrutinized, expanded in co-authorship processes and used with freedom by musicians, engineers and other interested parties. In fact, MASS has already been employed for diverse purposes which include music production, artistic presentations, psychoacoustic experiments and computer language diffusion where the appeal of audiovisual artifacts is exploited for education.
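The basic note-to-samples relation the abstract describes, including vibrato as an internal pitch fluctuation, can be sketched in a few lines. This is illustrative only and not the MASS toolbox API:

```python
import numpy as np

def note(freq=440.0, dur=1.0, vol=0.5, sr=44100):
    """Render a musical note as discrete-time samples: a sine at `freq` Hz
    lasting `dur` seconds, scaled to peak amplitude `vol`, at sample rate
    `sr`. Pitch, duration and volume map directly onto signal parameters."""
    n = np.arange(int(dur * sr))
    return vol * np.sin(2.0 * np.pi * freq * n / sr)

def vibrato(freq=440.0, dur=1.0, f_vib=6.0, depth=0.02, vol=0.5, sr=44100):
    """Note with sinusoidal pitch fluctuation: the instantaneous frequency
    oscillates by a fraction `depth` around `freq`, and the phase is the
    cumulative sum of the instantaneous frequency."""
    n = np.arange(int(dur * sr))
    inst_freq = freq * (1.0 + depth * np.sin(2.0 * np.pi * f_vib * n / sr))
    phase = 2.0 * np.pi * np.cumsum(inst_freq) / sr
    return vol * np.sin(phase)
```

Timbre would be added by summing harmonics with chosen weights over the same sample grid, following the same note-to-samples relation.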
|
1808.05946
|
Hao Chen
|
Hao Chen, Maria Vasardani, Stephan Winter
|
Disambiguating fine-grained place names from descriptions by clustering
| null | null | null | null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Everyday place descriptions often contain place names of fine-grained
features, such as buildings or businesses, that are more difficult to
disambiguate than names referring to larger places, for example cities or
natural geographic features. Fine-grained places are often significantly more
frequent and more similar to each other, and disambiguation heuristics
developed for larger places, such as those based on population or containment
relationships, are often not applicable in these cases. In this research, we
address the disambiguation of fine-grained place names from everyday place
descriptions. For this purpose, we evaluate the performance of different
existing clustering-based approaches, since clustering approaches require no
more knowledge other than the locations of ambiguous place names. We consider
not only approaches developed specifically for place name disambiguation, but
also clustering algorithms developed for general data mining that could
potentially be leveraged. We compare these methods with a novel algorithm, and
show that the novel algorithm outperforms the other algorithms in terms of
disambiguation precision and distance error over several tested datasets.
|
[
{
"created": "Fri, 17 Aug 2018 05:14:41 GMT",
"version": "v1"
}
] |
2018-08-21
|
[
[
"Chen",
"Hao",
""
],
[
"Vasardani",
"Maria",
""
],
[
"Winter",
"Stephan",
""
]
] |
Everyday place descriptions often contain place names of fine-grained features, such as buildings or businesses, that are more difficult to disambiguate than names referring to larger places, for example cities or natural geographic features. Fine-grained places are often significantly more frequent and more similar to each other, and disambiguation heuristics developed for larger places, such as those based on population or containment relationships, are often not applicable in these cases. In this research, we address the disambiguation of fine-grained place names from everyday place descriptions. For this purpose, we evaluate the performance of different existing clustering-based approaches, since clustering approaches require no more knowledge other than the locations of ambiguous place names. We consider not only approaches developed specifically for place name disambiguation, but also clustering algorithms developed for general data mining that could potentially be leveraged. We compare these methods with a novel algorithm, and show that the novel algorithm outperforms the other algorithms in terms of disambiguation precision and distance error over several tested datasets.
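A toy version of the clustering idea: candidate locations for ambiguous names are grouped by spatial proximity, requiring no knowledge beyond coordinates. This greedy single-link scheme is a stand-in for the algorithms compared in the paper, not any specific one:

```python
import math

def cluster_points(points, eps):
    """Greedy single-link clustering: a point joins a cluster if it lies
    within `eps` of any member, and clusters bridged by a new point are
    merged. Candidate interpretations of place names that cluster tightly
    are assumed to belong to the same described scene."""
    clusters = []
    for p in points:
        hit = [c for c in clusters
               if any(math.dist(p, q) <= eps for q in c)]
        merged = [p]
        for c in hit:
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return clusters
```

Disambiguation then picks, for each ambiguous name, the candidate falling in the densest cluster; distant outlier candidates end up in singleton clusters and are discarded.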
|
1611.01421
|
Saeed Reza Kheradpisheh
|
Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Simon J Thorpe,
Timoth\'ee Masquelier
|
STDP-based spiking deep convolutional neural networks for object
recognition
| null |
Neural Networks 2018
|
10.1016/j.neunet.2017.12.005
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous studies have shown that spike-timing-dependent plasticity (STDP) can
be used in spiking neural networks (SNN) to extract visual features of low or
intermediate complexity in an unsupervised manner. These studies, however, used
relatively shallow architectures, and only one layer was trainable. Another
line of research has demonstrated - using rate-based neural networks trained
with back-propagation - that having many layers increases the recognition
robustness, an approach known as deep learning. We thus designed a deep SNN,
comprising several convolutional (trainable with STDP) and pooling layers. We
used a temporal coding scheme where the most strongly activated neurons fire
first, and less activated neurons fire later or not at all. The network was
exposed to natural images. Thanks to STDP, neurons progressively learned
features corresponding to prototypical patterns that were both salient and
frequent. Only a few tens of examples per category were required and no label
was needed. After learning, the complexity of the extracted features increased
along the hierarchy, from edge detectors in the first layer to object
prototypes in the last layer. Coding was very sparse, with only a few thousand
spikes per image, and in some cases the object category could be reasonably
well inferred from the activity of a single higher-order neuron. More
generally, the activity of a few hundred such neurons contained robust
category information, as demonstrated using a classifier on Caltech 101,
ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over
other unsupervised techniques such as random crops (HMAX) or auto-encoders.
Taken together, our results suggest that the combination of STDP with latency
coding may be a key to understanding the way that the primate visual system
learns, its remarkable processing speed and its low energy consumption.
|
[
{
"created": "Fri, 4 Nov 2016 15:25:13 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Nov 2017 14:28:09 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Dec 2017 12:57:57 GMT",
"version": "v3"
}
] |
2018-03-12
|
[
[
"Kheradpisheh",
"Saeed Reza",
""
],
[
"Ganjtabesh",
"Mohammad",
""
],
[
"Thorpe",
"Simon J",
""
],
[
"Masquelier",
"Timothée",
""
]
] |
Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption.
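The unsupervised learning described above rests on an STDP weight update. The sketch below is the generic textbook pair-based rule with assumed parameter values; the paper itself uses a simplified STDP variant, so this is background, not the authors' exact rule:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate the synapse if the presynaptic spike
    precedes the postsynaptic one (causal pairing), depress it otherwise.
    Spike times in ms; the change decays exponentially with the time gap
    and the weight is clipped to [w_min, w_max]."""
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * math.exp(-dt / tau)
    else:
        dw = -a_minus * math.exp(dt / tau)
    return min(w_max, max(w_min, w + dw))
```

Combined with latency coding, neurons whose inputs consistently fire just before them get strengthened, which is how salient, frequent patterns are learned without labels.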
|
2312.15068
|
Xingfang Wu
|
Xingfang Wu, Heng Li, Nobukazu Yoshioka, Hironori Washizaki, Foutse
Khomh
|
Refining GPT-3 Embeddings with a Siamese Structure for Technical Post
Duplicate Detection
|
SANER 2024
| null | null | null |
cs.SE cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
One goal of technical online communities is to help developers find the right
answer in one place. A single question can be asked in different ways with
different wordings, leading to the existence of duplicate posts on technical
forums. The question of how to discover and link duplicate posts has garnered
the attention of both developer communities and researchers. For example, Stack
Overflow adopts a voting-based mechanism to mark and close duplicate posts.
However, addressing these constantly emerging duplicate posts in a timely
manner continues to pose challenges. Therefore, various approaches have been
proposed to detect duplicate posts on technical forums automatically. The
existing methods suffer from limitations either due to their reliance on
handcrafted similarity metrics which cannot sufficiently capture the semantics
of posts, or their lack of supervision to improve the performance.
Additionally, the efficiency of these methods is hindered by their dependence
on pair-wise feature generation, which can be impractical for large amounts of
data. In this work, we attempt to employ and refine the GPT-3 embeddings for
the duplicate detection task. We assume that the GPT-3 embeddings can
accurately represent the semantics of the posts. In addition, by training a
Siamese-based network based on the GPT-3 embeddings, we obtain a latent
embedding that accurately captures the duplicate relation in technical forum
posts. Our experiment on a benchmark dataset confirms the effectiveness of our
approach and demonstrates superior performance compared to baseline methods.
When applied to the dataset we constructed with a recent Stack Overflow dump,
our approach attains a Top-1, Top-5, and Top-30 accuracy of 23.1%, 43.9%, and
68.9%, respectively. With a manual study, we confirm our approach's potential
of finding unlabelled duplicates on technical forums.
|
[
{
"created": "Fri, 22 Dec 2023 21:14:37 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Mar 2024 17:03:42 GMT",
"version": "v2"
}
] |
2024-03-05
|
[
[
"Wu",
"Xingfang",
""
],
[
"Li",
"Heng",
""
],
[
"Yoshioka",
"Nobukazu",
""
],
[
"Washizaki",
"Hironori",
""
],
[
"Khomh",
"Foutse",
""
]
] |
One goal of technical online communities is to help developers find the right answer in one place. A single question can be asked in different ways with different wordings, leading to the existence of duplicate posts on technical forums. The question of how to discover and link duplicate posts has garnered the attention of both developer communities and researchers. For example, Stack Overflow adopts a voting-based mechanism to mark and close duplicate posts. However, addressing these constantly emerging duplicate posts in a timely manner continues to pose challenges. Therefore, various approaches have been proposed to detect duplicate posts on technical forums automatically. The existing methods suffer from limitations either due to their reliance on handcrafted similarity metrics which cannot sufficiently capture the semantics of posts, or their lack of supervision to improve the performance. Additionally, the efficiency of these methods is hindered by their dependence on pair-wise feature generation, which can be impractical for large amounts of data. In this work, we attempt to employ and refine the GPT-3 embeddings for the duplicate detection task. We assume that the GPT-3 embeddings can accurately represent the semantics of the posts. In addition, by training a Siamese-based network based on the GPT-3 embeddings, we obtain a latent embedding that accurately captures the duplicate relation in technical forum posts. Our experiment on a benchmark dataset confirms the effectiveness of our approach and demonstrates superior performance compared to baseline methods. When applied to the dataset we constructed with a recent Stack Overflow dump, our approach attains a Top-1, Top-5, and Top-30 accuracy of 23.1%, 43.9%, and 68.9%, respectively. With a manual study, we confirm our approach's potential of finding unlabelled duplicates on technical forums.
|
2011.13354
|
Aditya Kalyanpur
|
Aditya Kalyanpur, Tom Breloff, David Ferrucci
|
Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical
Explanations
|
Accepted at AAAI-2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional symbolic reasoning engines, while attractive for their precision
and explicability, have a few major drawbacks: the use of brittle inference
procedures that rely on exact matching (unification) of logical terms, an
inability to deal with uncertainty, and the need for a precompiled rule-base of
knowledge (the "knowledge acquisition" problem). To address these issues, we
devise a novel logical reasoner called Braid, which supports probabilistic
rules, and uses the notion of custom unification functions and dynamic rule
generation to overcome the brittle matching and knowledge-gap problem prevalent
in traditional reasoners. In this paper, we describe the reasoning algorithms
used in Braid, and their implementation in a distributed task-based framework
that builds proof/explanation graphs for an input query. We use a simple QA
example from a children's story to motivate Braid's design and explain how the
various components work together to produce a coherent logical explanation.
Finally, we evaluate Braid on the ROC Story Cloze test and achieve close to
state-of-the-art results while providing frame-based explanations.
|
[
{
"created": "Thu, 26 Nov 2020 15:36:06 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Dec 2020 04:44:00 GMT",
"version": "v2"
},
{
"created": "Fri, 11 Dec 2020 17:40:52 GMT",
"version": "v3"
},
{
"created": "Sun, 5 Dec 2021 02:34:30 GMT",
"version": "v4"
}
] |
2021-12-07
|
[
[
"Kalyanpur",
"Aditya",
""
],
[
"Breloff",
"Tom",
""
],
[
"Ferrucci",
"David",
""
]
] |
Traditional symbolic reasoning engines, while attractive for their precision and explicability, have a few major drawbacks: the use of brittle inference procedures that rely on exact matching (unification) of logical terms, an inability to deal with uncertainty, and the need for a precompiled rule-base of knowledge (the "knowledge acquisition" problem). To address these issues, we devise a novel logical reasoner called Braid, which supports probabilistic rules, and uses the notion of custom unification functions and dynamic rule generation to overcome the brittle matching and knowledge-gap problem prevalent in traditional reasoners. In this paper, we describe the reasoning algorithms used in Braid, and their implementation in a distributed task-based framework that builds proof/explanation graphs for an input query. We use a simple QA example from a children's story to motivate Braid's design and explain how the various components work together to produce a coherent logical explanation. Finally, we evaluate Braid on the ROC Story Cloze test and achieve close to state-of-the-art results while providing frame-based explanations.
|
2101.09667
|
Md Abul Bashar
|
Fahim Shahriar, Md Abul Bashar
|
Automatic Monitoring Social Dynamics During Big Incidences: A Case Study
of COVID-19 in Bangladesh
|
Very minor change
| null | null | null |
cs.CY cs.CL cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Newspapers are trustworthy media where people get the most reliable and
credible information compared with other sources. On the other hand, social
media often spread rumors and misleading news to get more traffic and
attention. Careful characterization, evaluation, and interpretation of
newspaper data can provide insight into intriguing and passionate social issues
to monitor any big social incidence. This study analyzed a large set of
spatio-temporal Bangladeshi newspaper data related to the COVID-19 pandemic.
The methodology included volume analysis, topic analysis, automated
classification, and sentiment analysis of news articles to get insight into the
COVID-19 pandemic in different sectors and regions in Bangladesh over a period
of time. This analysis will help the government and other organizations to
figure out the challenges that have arisen in society due to this pandemic,
what steps should be taken immediately and in the post-pandemic period, how the
government and its allies can come together to address the crisis in the
future, keeping these problems in mind.
|
[
{
"created": "Sun, 24 Jan 2021 07:46:17 GMT",
"version": "v1"
},
{
"created": "Sun, 31 Jan 2021 16:47:37 GMT",
"version": "v2"
}
] |
2021-02-02
|
[
[
"Shahriar",
"Fahim",
""
],
[
"Bashar",
"Md Abul",
""
]
] |
Newspapers are trustworthy media where people get the most reliable and credible information compared with other sources. On the other hand, social media often spread rumors and misleading news to get more traffic and attention. Careful characterization, evaluation, and interpretation of newspaper data can provide insight into intriguing and passionate social issues to monitor any big social incidence. This study analyzed a large set of spatio-temporal Bangladeshi newspaper data related to the COVID-19 pandemic. The methodology included volume analysis, topic analysis, automated classification, and sentiment analysis of news articles to get insight into the COVID-19 pandemic in different sectors and regions in Bangladesh over a period of time. This analysis will help the government and other organizations to figure out the challenges that have arisen in society due to this pandemic, what steps should be taken immediately and in the post-pandemic period, how the government and its allies can come together to address the crisis in the future, keeping these problems in mind.
|
2402.07066
|
Guanyang Wang
|
Prathamesh Dharangutte, Jie Gao, Ruobin Gong, Guanyang Wang
|
Differentially Private Range Queries with Correlated Input Perturbation
|
26 pages, 8 figures
| null | null | null |
cs.CR cs.LG stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
This work proposes a class of locally differentially private mechanisms for
linear queries, in particular range queries, that leverages correlated input
perturbation to simultaneously achieve unbiasedness, consistency, statistical
transparency, and control over utility requirements in terms of accuracy
targets expressed either in certain query margins or as implied by the
hierarchical database structure. The proposed Cascade Sampling algorithm
instantiates the mechanism exactly and efficiently. Our bounds show that we
obtain near-optimal utility while being empirically competitive against output
perturbation methods.
|
[
{
"created": "Sat, 10 Feb 2024 23:42:05 GMT",
"version": "v1"
}
] |
2024-02-13
|
[
[
"Dharangutte",
"Prathamesh",
""
],
[
"Gao",
"Jie",
""
],
[
"Gong",
"Ruobin",
""
],
[
"Wang",
"Guanyang",
""
]
] |
This work proposes a class of locally differentially private mechanisms for linear queries, in particular range queries, that leverages correlated input perturbation to simultaneously achieve unbiasedness, consistency, statistical transparency, and control over utility requirements in terms of accuracy targets expressed either in certain query margins or as implied by the hierarchical database structure. The proposed Cascade Sampling algorithm instantiates the mechanism exactly and efficiently. Our bounds show that we obtain near-optimal utility while being empirically competitive against output perturbation methods.
|
1706.07372
|
Carmen Torres Lopez
|
Carmen Torres Lopez, Stefan Marr, Hanspeter M\"ossenb\"ock, Elisa
Gonzalez Boix
|
A Study of Concurrency Bugs and Advanced Development Support for
Actor-based Programs
|
- Submitted for review - Removed section 6 "Research Roadmap for
Debuggers", its content was summarized in the Future Work section - Added
references for section 1, section 3, section 4.3 and section 5.1 - Updated
citations
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The actor model is an attractive foundation for developing concurrent
applications because actors are isolated concurrent entities that communicate
through asynchronous messages and do not share state. Thereby, they avoid
concurrency bugs such as data races, but are not immune to concurrency bugs in
general. This study taxonomizes concurrency bugs in actor-based programs
reported in literature. Furthermore, it analyzes the bugs to identify the
patterns causing them as well as their observable behavior. Based on this
taxonomy, we further analyze the literature and find that current approaches to
static analysis and testing focus on communication deadlocks and message
protocol violations. However, they do not provide solutions to identify
livelocks and behavioral deadlocks. The insights obtained in this study can be
used to improve debugging support for actor-based programs with new debugging
techniques to identify the root cause of complex concurrency bugs.
|
[
{
"created": "Thu, 22 Jun 2017 15:31:53 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Apr 2018 16:35:24 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Apr 2018 08:41:27 GMT",
"version": "v3"
}
] |
2018-04-25
|
[
[
"Lopez",
"Carmen Torres",
""
],
[
"Marr",
"Stefan",
""
],
[
"Mössenböck",
"Hanspeter",
""
],
[
"Boix",
"Elisa Gonzalez",
""
]
] |
The actor model is an attractive foundation for developing concurrent applications because actors are isolated concurrent entities that communicate through asynchronous messages and do not share state. Thereby, they avoid concurrency bugs such as data races, but are not immune to concurrency bugs in general. This study taxonomizes concurrency bugs in actor-based programs reported in literature. Furthermore, it analyzes the bugs to identify the patterns causing them as well as their observable behavior. Based on this taxonomy, we further analyze the literature and find that current approaches to static analysis and testing focus on communication deadlocks and message protocol violations. However, they do not provide solutions to identify livelocks and behavioral deadlocks. The insights obtained in this study can be used to improve debugging support for actor-based programs with new debugging techniques to identify the root cause of complex concurrency bugs.
|
2005.03724
|
Yang Gao
|
Yang Gao, Wei Zhao, Steffen Eger
|
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for
Multi-Document Summarization
|
ACL 2020
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study unsupervised multi-document summarization evaluation metrics, which
require neither human-written reference summaries nor human annotations (e.g.
preferences, ratings, etc.). We propose SUPERT, which rates the quality of a
summary by measuring its semantic similarity with a pseudo reference summary,
i.e. selected salient sentences from the source documents, using contextualized
embeddings and soft token alignment techniques. Compared to the
state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with
human ratings by 18-39%. Furthermore, we use SUPERT as rewards to guide a
neural-based reinforcement learning summarizer, yielding favorable performance
compared to the state-of-the-art unsupervised summarizers. All source code is
available at https://github.com/yg211/acl20-ref-free-eval.
|
[
{
"created": "Thu, 7 May 2020 19:54:24 GMT",
"version": "v1"
}
] |
2020-05-11
|
[
[
"Gao",
"Yang",
""
],
[
"Zhao",
"Wei",
""
],
[
"Eger",
"Steffen",
""
]
] |
We study unsupervised multi-document summarization evaluation metrics, which require neither human-written reference summaries nor human annotations (e.g. preferences, ratings, etc.). We propose SUPERT, which rates the quality of a summary by measuring its semantic similarity with a pseudo reference summary, i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques. Compared to the state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with human ratings by 18-39%. Furthermore, we use SUPERT as rewards to guide a neural-based reinforcement learning summarizer, yielding favorable performance compared to the state-of-the-art unsupervised summarizers. All source code is available at https://github.com/yg211/acl20-ref-free-eval.
|
2206.06247
|
Hugo Tessier
|
Hugo Tessier, Vincent Gripon, Mathieu L\'eonardon, Matthieu Arzel,
David Bertrand, Thomas Hannagan
|
Leveraging Structured Pruning of Convolutional Neural Networks
|
6 pages, 5 figures, submitted to SiPS 2022
| null |
10.1109/SiPS55645.2022.9919253
| null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Structured pruning is a popular method to reduce the cost of convolutional
neural networks, which are the state of the art in many computer vision tasks.
However, depending on the architecture, pruning introduces dimensional
discrepancies which prevent the actual reduction of pruned networks. To tackle
this problem, we propose a method that is able to take any structured pruning
mask and generate a network that does not encounter any of these problems and
can be leveraged efficiently. We provide an accurate description of our
solution and show results of gains, in energy consumption and inference time on
embedded hardware, of pruned convolutional neural networks.
|
[
{
"created": "Mon, 13 Jun 2022 15:29:12 GMT",
"version": "v1"
}
] |
2022-12-13
|
[
[
"Tessier",
"Hugo",
""
],
[
"Gripon",
"Vincent",
""
],
[
"Léonardon",
"Mathieu",
""
],
[
"Arzel",
"Matthieu",
""
],
[
"Bertrand",
"David",
""
],
[
"Hannagan",
"Thomas",
""
]
] |
Structured pruning is a popular method to reduce the cost of convolutional neural networks, which are the state of the art in many computer vision tasks. However, depending on the architecture, pruning introduces dimensional discrepancies which prevent the actual reduction of pruned networks. To tackle this problem, we propose a method that is able to take any structured pruning mask and generate a network that does not encounter any of these problems and can be leveraged efficiently. We provide an accurate description of our solution and show results of gains, in energy consumption and inference time on embedded hardware, of pruned convolutional neural networks.
|
1907.00042
|
Wei Cai
|
Tengfei Wang and Shuyi Zhang and Xiao Wu and Wei Cai
|
Rhythm Dungeon: A Blockchain-based Music Roguelike Game
| null |
2019 Foundation of Digital Games Demos (FDG 2019 DEMO), San Luis
Obispo, California, USA, August 26-30, 2019
| null | null |
cs.MM cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rhythm Dungeon is a rhythm game which leverages the blockchain as a shared
open database. During the gaming session, the player explores a roguelike
dungeon by inputting specific sequences in time to music rhythm. By integrating
smart contract to the game program, the enemies through the venture are
generated from other games which share the identical blockchain. On the other
hand, the player may upload their characters at the end of their journey, so
that their own character may appear in other games and have an influence.
Rhythm Dungeon is designed and implemented to show the potential of
decentralized gaming experience, which utilizes the blockchain to provide
asynchronous interactions among massive players.
|
[
{
"created": "Fri, 28 Jun 2019 19:05:53 GMT",
"version": "v1"
}
] |
2019-07-02
|
[
[
"Wang",
"Tengfei",
""
],
[
"Zhang",
"Shuyi",
""
],
[
"Wu",
"Xiao",
""
],
[
"Cai",
"Wei",
""
]
] |
Rhythm Dungeon is a rhythm game which leverages the blockchain as a shared open database. During the gaming session, the player explores a roguelike dungeon by inputting specific sequences in time to music rhythm. By integrating smart contract to the game program, the enemies through the venture are generated from other games which share the identical blockchain. On the other hand, the player may upload their characters at the end of their journey, so that their own character may appear in other games and have an influence. Rhythm Dungeon is designed and implemented to show the potential of decentralized gaming experience, which utilizes the blockchain to provide asynchronous interactions among massive players.
|
1011.4957
|
Andreas Wiese
|
Jos\'e Verschae and Andreas Wiese
|
On the Configuration-LP for Scheduling on Unrelated Machines
|
12 pages, 1 figure
| null | null |
Report-no: 025-2010
|
cs.DM cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most important open problems in machine scheduling is the problem
of scheduling a set of jobs on unrelated machines to minimize the makespan. The
best known approximation algorithm for this problem guarantees an approximation
factor of 2. It is known to be NP-hard to approximate with a better ratio than
3/2. Closing this gap has been open for over 20 years. The best known
approximation factors are achieved by LP-based algorithms. The strongest known
linear program formulation for the problem is the configuration-LP. We show
that the configuration-LP has an integrality gap of 2 even for the special case
of unrelated graph balancing, where each job can be assigned to at most two
machines. In particular, our result implies that a large family of cuts does
not help to diminish the integrality gap of the canonical assignment-LP. Also,
we present cases of the problem which can be approximated with a better factor
than 2. They constitute valuable insights for constructing an NP-hardness
reduction which improves the known lower bound. Very recently Svensson studied
the restricted assignment case, where each job can only be assigned to a given
set of machines on which it has the same processing time. He shows that in this
setting the configuration-LP has an integrality gap of 33/17<2. Hence, our
results imply that the unrelated graph balancing case is significantly more
complex than the restricted assignment case. Then we turn to another objective
function: maximizing the minimum machine load. For the case that every job can
be assigned to at most two machines we give a purely combinatorial
2-approximation algorithm which is best possible, unless P=NP. This improves on
the computationally costly LP-based (2+eps)-approximation algorithm by
Chakrabarty et al.
|
[
{
"created": "Mon, 22 Nov 2010 21:30:29 GMT",
"version": "v1"
}
] |
2015-03-17
|
[
[
"Verschae",
"José",
""
],
[
"Wiese",
"Andreas",
""
]
] |
One of the most important open problems in machine scheduling is the problem of scheduling a set of jobs on unrelated machines to minimize the makespan. The best known approximation algorithm for this problem guarantees an approximation factor of 2. It is known to be NP-hard to approximate with a better ratio than 3/2. Closing this gap has been open for over 20 years. The best known approximation factors are achieved by LP-based algorithms. The strongest known linear program formulation for the problem is the configuration-LP. We show that the configuration-LP has an integrality gap of 2 even for the special case of unrelated graph balancing, where each job can be assigned to at most two machines. In particular, our result implies that a large family of cuts does not help to diminish the integrality gap of the canonical assignment-LP. Also, we present cases of the problem which can be approximated with a better factor than 2. They constitute valuable insights for constructing an NP-hardness reduction which improves the known lower bound. Very recently Svensson studied the restricted assignment case, where each job can only be assigned to a given set of machines on which it has the same processing time. He shows that in this setting the configuration-LP has an integrality gap of 33/17<2. Hence, our results imply that the unrelated graph balancing case is significantly more complex than the restricted assignment case. Then we turn to another objective function: maximizing the minimum machine load. For the case that every job can be assigned to at most two machines we give a purely combinatorial 2-approximation algorithm which is best possible, unless P=NP. This improves on the computationally costly LP-based (2+eps)-approximation algorithm by Chakrabarty et al.
|
2012.11352
|
Francesco Ranzato
|
Francesco Ranzato and Marco Zanella
|
Genetic Adversarial Training of Decision Trees
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We put forward a novel learning methodology for ensembles of decision trees
based on a genetic algorithm which is able to train a decision tree for
maximizing both its accuracy and its robustness to adversarial perturbations.
This learning algorithm internally leverages a complete formal verification
technique for robustness properties of decision trees based on abstract
interpretation, a well known static program analysis technique. We implemented
this genetic adversarial training algorithm in a tool called Meta-Silvae (MS)
and we experimentally evaluated it on some reference datasets used in
adversarial training. The experimental results show that MS is able to train
robust models that compete with and often improve on the current
state-of-the-art of adversarial training of decision trees while being much
more compact and therefore interpretable and efficient tree models.
|
[
{
"created": "Mon, 21 Dec 2020 14:05:57 GMT",
"version": "v1"
}
] |
2020-12-22
|
[
[
"Ranzato",
"Francesco",
""
],
[
"Zanella",
"Marco",
""
]
] |
We put forward a novel learning methodology for ensembles of decision trees based on a genetic algorithm which is able to train a decision tree for maximizing both its accuracy and its robustness to adversarial perturbations. This learning algorithm internally leverages a complete formal verification technique for robustness properties of decision trees based on abstract interpretation, a well known static program analysis technique. We implemented this genetic adversarial training algorithm in a tool called Meta-Silvae (MS) and we experimentally evaluated it on some reference datasets used in adversarial training. The experimental results show that MS is able to train robust models that compete with and often improve on the current state-of-the-art of adversarial training of decision trees while being much more compact and therefore interpretable and efficient tree models.
|
2002.02061
|
Xiaoguang Li
|
Xiaoguang Li, Hui Li, Haonan Yan, Zelei Cheng, Wenhai Sun, Hui Zhu
|
Mitigating Query-Flooding Parameter Duplication Attack on Regression
Models with High-Dimensional Gaussian Mechanism
|
it has some mistakes. Since I submitted the paper for the first time,
there were many mistakes in the paper. At the same time, I found a serious
mistake in the content of the paper, so I thought it was inappropriate to
publish it now after careful consideration.
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public intelligent services enabled by machine learning algorithms are
vulnerable to model extraction attacks that can steal confidential information
of the learning models through public queries. Differential privacy (DP) has
been considered a promising technique to mitigate this attack. However, we find
that the vulnerability persists when regression models are being protected by
current DP solutions. We show that the adversary can launch a query-flooding
parameter duplication (QPD) attack to infer the model information by repeated
queries.
To defend against the QPD attack on logistic and linear regression models, we
propose a novel High-Dimensional Gaussian (HDG) mechanism to prevent
unauthorized information disclosure without interrupting the intended services.
In contrast to prior work, the proposed HDG mechanism will dynamically generate
the privacy budget and random noise for different queries and their results to
enhance the obfuscation. Besides, for the first time, HDG enables an optimal
privacy budget allocation that automatically determines the minimum amount of
noise to be added per user-desired privacy level on each dimension. We
comprehensively evaluate the performance of HDG using real-world datasets and
show that HDG effectively mitigates the QPD attack while satisfying the
privacy requirements. We also prepare to open-source the relevant codes to the
community for further research.
|
[
{
"created": "Thu, 6 Feb 2020 01:47:08 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Apr 2020 14:20:42 GMT",
"version": "v2"
},
{
"created": "Sun, 7 Jun 2020 01:40:09 GMT",
"version": "v3"
}
] |
2020-06-09
|
[
[
"Li",
"Xiaoguang",
""
],
[
"Li",
"Hui",
""
],
[
"Yan",
"Haonan",
""
],
[
"Cheng",
"Zelei",
""
],
[
"Sun",
"Wenhai",
""
],
[
"Zhu",
"Hui",
""
]
] |
Public intelligent services enabled by machine learning algorithms are vulnerable to model extraction attacks that can steal confidential information of the learning models through public queries. Differential privacy (DP) has been considered a promising technique to mitigate this attack. However, we find that the vulnerability persists when regression models are being protected by current DP solutions. We show that the adversary can launch a query-flooding parameter duplication (QPD) attack to infer the model information by repeated queries. To defend against the QPD attack on logistic and linear regression models, we propose a novel High-Dimensional Gaussian (HDG) mechanism to prevent unauthorized information disclosure without interrupting the intended services. In contrast to prior work, the proposed HDG mechanism will dynamically generate the privacy budget and random noise for different queries and their results to enhance the obfuscation. Besides, for the first time, HDG enables an optimal privacy budget allocation that automatically determines the minimum amount of noise to be added per user-desired privacy level on each dimension. We comprehensively evaluate the performance of HDG using real-world datasets and show that HDG effectively mitigates the QPD attack while satisfying the privacy requirements. We also prepare to open-source the relevant codes to the community for further research.
|
2308.01529
|
Judy Fox
|
Navya Annapareddy, Yingzheng Liu, Judy Fox
|
Towards Fair and Privacy Preserving Federated Learning for the
Healthcare Domain
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated learning enables data sharing in healthcare contexts where it might
otherwise be difficult due to data-use-ordinances or security and communication
constraints. Distributed and shared data models allow models to become
generalizable and learn from heterogeneous clients. While addressing data
security, privacy, and vulnerability considerations, data itself is not shared
across nodes in a given learning network. On the other hand, FL models often
struggle with variable client data distributions and operate on an assumption
of independent and identically distributed data. As the field has grown, the
notion of fairness-aware federated learning (FAFL) mechanisms has also been introduced
and is of distinct significance to the healthcare domain where many sensitive
groups and protected classes exist. In this paper, we create a benchmark
methodology for FAFL mechanisms under various heterogeneous conditions on
datasets in the healthcare domain typically outside the scope of current
federated learning benchmarks, such as medical imaging and waveform data
formats. Our results indicate considerable variation in how various FAFL
schemes respond to high levels of data heterogeneity. Additionally, doing so
under privacy-preserving conditions can create significant increases in network
communication cost and latency compared to the typical federated learning
scheme.
|
[
{
"created": "Thu, 3 Aug 2023 04:08:06 GMT",
"version": "v1"
}
] |
2023-08-04
|
[
[
"Annapareddy",
"Navya",
""
],
[
"Liu",
"Yingzheng",
""
],
[
"Fox",
"Judy",
""
]
] |
Federated learning enables data sharing in healthcare contexts where it might otherwise be difficult due to data use ordinances or security and communication constraints. Distributed and shared data models allow models to become generalizable and learn from heterogeneous clients. While addressing data security, privacy, and vulnerability considerations, data itself is not shared across nodes in a given learning network. On the other hand, FL models often struggle with variable client data distributions and operate on an assumption of independent and identically distributed data. As the field has grown, the notion of fairness-aware federated learning mechanisms has also been introduced and is of distinct significance to the healthcare domain where many sensitive groups and protected classes exist. In this paper, we create a benchmark methodology for FAFL mechanisms under various heterogeneous conditions on datasets in the healthcare domain typically outside the scope of current federated learning benchmarks, such as medical imaging and waveform data formats. Our results indicate considerable variation in how various FAFL schemes respond to high levels of data heterogeneity. Additionally, doing so under privacy-preserving conditions can create significant increases in network communication cost and latency compared to the typical federated learning scheme.
|
2203.16106
|
Marcos Faundez-Zanuy
|
Virginia Espinosa-Dur\'o, Marcos Faundez-Zanuy, Jiri Mekyska
|
Contribution of the Temperature of the Objects to the Problem of Thermal
Imaging Focusing
|
5 pages, published in 2012 IEEE International Carnahan Conference on
Security Technology (ICCST), 15-18 Oct. 2012 Boston (MA) USA. arXiv admin
note: text overlap with arXiv:2203.08513
|
2012 IEEE International Carnahan Conference on Security Technology
(ICCST), 2012, pp. 363-366
|
10.1109/CCST.2012.6393586
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
When focusing an image, the depth of field, the aperture, and the distance
from the camera to the object must be taken into account, both in the visible
and in the infrared spectrum. Our experiments reveal that, in addition, the
focusing problem in the thermal spectrum is also strongly dependent on the
temperature of the object itself (and/or the scene).
|
[
{
"created": "Wed, 30 Mar 2022 07:28:13 GMT",
"version": "v1"
}
] |
2022-03-31
|
[
[
"Espinosa-Duró",
"Virginia",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Mekyska",
"Jiri",
""
]
] |
When focusing an image, the depth of field, the aperture, and the distance from the camera to the object must be taken into account, both in the visible and in the infrared spectrum. Our experiments reveal that, in addition, the focusing problem in the thermal spectrum is also strongly dependent on the temperature of the object itself (and/or the scene).
|
2307.15425
|
Arash Hajikhani Dr.
|
Arash Hajikhani, Carolyn Cole
|
A Critical Review of Large Language Models: Sensitivity, Bias, and the
Path Toward Specialized AI
|
17 pages, 6 figures, 6 tables
| null | null |
17
|
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper examines the comparative effectiveness of a specialized compiled
language model and a general-purpose model like OpenAI's GPT-3.5 in detecting
SDGs within text data. It presents a critical review of Large Language Models
(LLMs), addressing challenges related to bias and sensitivity. The necessity of
specialized training for precise, unbiased analysis is underlined. A case study
using a company descriptions dataset offers insight into the differences
between the GPT-3.5 and the specialized SDG detection model. While GPT-3.5
boasts broader coverage, it may identify SDGs with limited relevance to the
companies' activities. In contrast, the specialized model zeroes in on highly
pertinent SDGs. The importance of thoughtful model selection is emphasized,
taking into account task requirements, cost, complexity, and transparency.
Despite the versatility of LLMs, the use of specialized models is suggested for
tasks demanding precision and accuracy. The study concludes by encouraging
further research to find a balance between the capabilities of LLMs and the
need for domain-specific expertise and interpretability.
|
[
{
"created": "Fri, 28 Jul 2023 09:20:22 GMT",
"version": "v1"
}
] |
2023-07-31
|
[
[
"Hajikhani",
"Arash",
""
],
[
"Cole",
"Carolyn",
""
]
] |
This paper examines the comparative effectiveness of a specialized compiled language model and a general-purpose model like OpenAI's GPT-3.5 in detecting SDGs within text data. It presents a critical review of Large Language Models (LLMs), addressing challenges related to bias and sensitivity. The necessity of specialized training for precise, unbiased analysis is underlined. A case study using a company descriptions dataset offers insight into the differences between the GPT-3.5 and the specialized SDG detection model. While GPT-3.5 boasts broader coverage, it may identify SDGs with limited relevance to the companies' activities. In contrast, the specialized model zeroes in on highly pertinent SDGs. The importance of thoughtful model selection is emphasized, taking into account task requirements, cost, complexity, and transparency. Despite the versatility of LLMs, the use of specialized models is suggested for tasks demanding precision and accuracy. The study concludes by encouraging further research to find a balance between the capabilities of LLMs and the need for domain-specific expertise and interpretability.
|
2208.09665
|
Jun Yuan
|
Jun Yuan, Mengchen Liu, Fengyuan Tian, and Shixia Liu
|
Visual Analysis of Neural Architecture Spaces for Summarizing Design
Principles
|
11 pages, 11 figures; accepted for IEEE VIS 2022
| null | null | null |
cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in artificial intelligence largely benefit from better neural
network architectures. These architectures are a product of a costly process of
trial-and-error. To ease this process, we develop ArchExplorer, a visual
analysis method for understanding a neural architecture space and summarizing
design principles. The key idea behind our method is to make the architecture
space explainable by exploiting structural distances between architectures. We
formulate the pairwise distance calculation as solving an all-pairs shortest
path problem. To improve efficiency, we decompose this problem into a set of
single-source shortest path problems. The time complexity is reduced from
O(kn^2N) to O(knN). Architectures are hierarchically clustered according to the
distances between them. A circle-packing-based architecture visualization has
been developed to convey both the global relationships between clusters and
local neighborhoods of the architectures in each cluster. Two case studies and
a post-analysis are presented to demonstrate the effectiveness of ArchExplorer
in summarizing design principles and selecting better-performing architectures.
|
[
{
"created": "Sat, 20 Aug 2022 12:15:59 GMT",
"version": "v1"
}
] |
2022-08-23
|
[
[
"Yuan",
"Jun",
""
],
[
"Liu",
"Mengchen",
""
],
[
"Tian",
"Fengyuan",
""
],
[
"Liu",
"Shixia",
""
]
] |
Recent advances in artificial intelligence largely benefit from better neural network architectures. These architectures are a product of a costly process of trial-and-error. To ease this process, we develop ArchExplorer, a visual analysis method for understanding a neural architecture space and summarizing design principles. The key idea behind our method is to make the architecture space explainable by exploiting structural distances between architectures. We formulate the pairwise distance calculation as solving an all-pairs shortest path problem. To improve efficiency, we decompose this problem into a set of single-source shortest path problems. The time complexity is reduced from O(kn^2N) to O(knN). Architectures are hierarchically clustered according to the distances between them. A circle-packing-based architecture visualization has been developed to convey both the global relationships between clusters and local neighborhoods of the architectures in each cluster. Two case studies and a post-analysis are presented to demonstrate the effectiveness of ArchExplorer in summarizing design principles and selecting better-performing architectures.
|
2207.05979
|
Shogo Anda
|
Shogo Anda, Masato Kikuchi, Tadachika Ozono
|
Developing a Component Comment Extractor from Product Reviews on
E-Commerce Sites
|
The 14th International Conference on E-Service and Knowledge
Management (ESKM 2022), 6 pages, 6 figures, 5 tables
|
2022 11th International Congress on Advanced Applied Informatics
(IIAI-AAI), pp. 83--88, 2022
|
10.1109/IIAI-AAI55812.2022.00026
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consumers often read product reviews to inform their buying decision, as some
consumers want to know a specific component of a product. However, because
typical sentences on product reviews contain various details, users must
identify sentences about components they want to know amongst the many reviews.
Therefore, we aimed to develop a system that identifies and collects component
and aspect information of products in sentences. Our BERT-based classifiers
assign labels referring to components and aspects to sentences in reviews and
extract sentences with comments on specific components and aspects. We
determined proper labels based on the words identified through pattern
matching from product reviews to create the training data. Because we could not
use the words as labels, we carefully created labels covering the meanings of
the words. However, the training data was imbalanced on component and aspect
pairs. We introduced a data augmentation method using WordNet to reduce the
bias. Our evaluation demonstrates that the system can determine labels for road
bikes using pattern matching, covering more than 88\% of the indicators of
components and aspects on e-commerce sites. Moreover, our data augmentation
method can improve the F1-measure on insufficient data from 0.66 to 0.76.
|
[
{
"created": "Wed, 13 Jul 2022 06:25:55 GMT",
"version": "v1"
}
] |
2022-07-14
|
[
[
"Anda",
"Shogo",
""
],
[
"Kikuchi",
"Masato",
""
],
[
"Ozono",
"Tadachika",
""
]
] |
Consumers often read product reviews to inform their buying decision, as some consumers want to know a specific component of a product. However, because typical sentences on product reviews contain various details, users must identify sentences about components they want to know amongst the many reviews. Therefore, we aimed to develop a system that identifies and collects component and aspect information of products in sentences. Our BERT-based classifiers assign labels referring to components and aspects to sentences in reviews and extract sentences with comments on specific components and aspects. We determined proper labels based on the words identified through pattern matching from product reviews to create the training data. Because we could not use the words as labels, we carefully created labels covering the meanings of the words. However, the training data was imbalanced on component and aspect pairs. We introduced a data augmentation method using WordNet to reduce the bias. Our evaluation demonstrates that the system can determine labels for road bikes using pattern matching, covering more than 88\% of the indicators of components and aspects on e-commerce sites. Moreover, our data augmentation method can improve the F1-measure on insufficient data from 0.66 to 0.76.
|
2108.06583
|
Yuan Wu
|
Yuan Wu, Diana Inkpen and Ahmed El-Roby
|
Towards Category and Domain Alignment: Category-Invariant Feature
Enhancement for Adversarial Domain Adaptation
|
10 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial domain adaptation has made impressive advances in transferring
knowledge from the source domain to the target domain by aligning feature
distributions of both domains. These methods focus on minimizing domain
divergence and regard the adaptability, which is measured as the expected error
of the ideal joint hypothesis on these two domains, as a small constant.
However, these approaches still face two issues: (1) Adversarial domain
alignment distorts the original feature distributions, deteriorating the
adaptability; (2) Transforming feature representations to be domain-invariant
needs to sacrifice domain-specific variations, resulting in weaker
discriminability. In order to alleviate these issues, we propose
category-invariant feature enhancement (CIFE), a general mechanism that
enhances the adversarial domain adaptation through optimizing the adaptability.
Specifically, the CIFE approach introduces category-invariant features to boost
the discriminability of domain-invariant features while preserving the
transferability. Experiments show that the CIFE could improve upon
representative adversarial domain adaptation methods to yield state-of-the-art
results on five benchmarks.
|
[
{
"created": "Sat, 14 Aug 2021 16:51:39 GMT",
"version": "v1"
}
] |
2021-08-17
|
[
[
"Wu",
"Yuan",
""
],
[
"Inkpen",
"Diana",
""
],
[
"El-Roby",
"Ahmed",
""
]
] |
Adversarial domain adaptation has made impressive advances in transferring knowledge from the source domain to the target domain by aligning feature distributions of both domains. These methods focus on minimizing domain divergence and regard the adaptability, which is measured as the expected error of the ideal joint hypothesis on these two domains, as a small constant. However, these approaches still face two issues: (1) Adversarial domain alignment distorts the original feature distributions, deteriorating the adaptability; (2) Transforming feature representations to be domain-invariant needs to sacrifice domain-specific variations, resulting in weaker discriminability. In order to alleviate these issues, we propose category-invariant feature enhancement (CIFE), a general mechanism that enhances the adversarial domain adaptation through optimizing the adaptability. Specifically, the CIFE approach introduces category-invariant features to boost the discriminability of domain-invariant features while preserving the transferability. Experiments show that the CIFE could improve upon representative adversarial domain adaptation methods to yield state-of-the-art results on five benchmarks.
|
2110.15056
|
Jack Millichamp MSc
|
Jack Millichamp, Xi Chen
|
Brain-inspired feature exaggeration in generative replay for continual
learning
|
5 pages, 3 figures, submitted to ICASSP
| null | null | null |
cs.LG cs.AI cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
The catastrophic forgetting of previously learnt classes is one of the main
obstacles to the successful development of a reliable and accurate generative
continual learning model. When learning new classes, the internal
representation of previously learnt ones can often be overwritten, resulting in
the model's "memory" of earlier classes being lost over time. Recent
developments in neuroscience have uncovered a method through which the brain
avoids its own form of memory interference. By applying a targeted exaggeration of
the differences between features of similar, yet competing memories, the brain
can more easily distinguish and recall them. In this paper, the application of
such exaggeration, via the repulsion of replayed samples belonging to competing
classes, is explored. Through the development of a 'reconstruction repulsion'
loss, this paper presents a new state-of-the-art performance on the
classification of early classes in the class-incremental learning dataset
CIFAR100.
|
[
{
"created": "Tue, 26 Oct 2021 10:49:02 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Nov 2021 13:25:22 GMT",
"version": "v2"
}
] |
2021-11-24
|
[
[
"Millichamp",
"Jack",
""
],
[
"Chen",
"Xi",
""
]
] |
The catastrophic forgetting of previously learnt classes is one of the main obstacles to the successful development of a reliable and accurate generative continual learning model. When learning new classes, the internal representation of previously learnt ones can often be overwritten, resulting in the model's "memory" of earlier classes being lost over time. Recent developments in neuroscience have uncovered a method through which the brain avoids its own form of memory interference. By applying a targeted exaggeration of the differences between features of similar, yet competing memories, the brain can more easily distinguish and recall them. In this paper, the application of such exaggeration, via the repulsion of replayed samples belonging to competing classes, is explored. Through the development of a 'reconstruction repulsion' loss, this paper presents a new state-of-the-art performance on the classification of early classes in the class-incremental learning dataset CIFAR100.
|
2110.12618
|
Xiang Zhang
|
Xiang Zhang, Shiyu Jin, Changhao Wang, Xinghao Zhu, Masayoshi Tomizuka
|
Learning Insertion Primitives with Discrete-Continuous Hybrid Action
Space for Robotic Assembly Tasks
|
Submitted to ICRA 22
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a discrete-continuous action space to learn insertion
primitives for robotic assembly tasks. A primitive is a sequence of elementary
actions with certain exit conditions, such as "pushing down the peg until
contact". Since the primitive is an abstraction of robot control commands and
encodes human prior knowledge, it reduces the exploration difficulty and yields
better learning efficiency. In this paper, we learn robot assembly skills via
primitives. Specifically, we formulate insertion primitives as parameterized
actions: hybrid actions consisting of discrete primitive types and continuous
primitive parameters. Compared with the previous work using a set of
discretized parameters for each primitive, the agent in our method can freely
choose primitive parameters from a continuous space, which is more flexible and
efficient. To learn these insertion primitives, we propose Twin-Smoothed
Multi-pass Deep Q-Network (TS-MP-DQN), an advanced version of MP-DQN with twin
Q-network to reduce the Q-value over-estimation. Extensive experiments are
conducted in the simulation and real world for validation. From experiment
results, our approach achieves higher success rates than three baselines:
MP-DQN with parameterized actions, primitives with discrete parameters, and
continuous velocity control. Furthermore, learned primitives are robust to
sim-to-real transfer and can generalize to challenging assembly tasks such as
tight round peg-hole and complex shaped electric connectors with promising
success rates. Experiment videos are available at
https://msc.berkeley.edu/research/insertion-primitives.html.
|
[
{
"created": "Mon, 25 Oct 2021 03:08:01 GMT",
"version": "v1"
}
] |
2021-10-26
|
[
[
"Zhang",
"Xiang",
""
],
[
"Jin",
"Shiyu",
""
],
[
"Wang",
"Changhao",
""
],
[
"Zhu",
"Xinghao",
""
],
[
"Tomizuka",
"Masayoshi",
""
]
] |
This paper introduces a discrete-continuous action space to learn insertion primitives for robotic assembly tasks. A primitive is a sequence of elementary actions with certain exit conditions, such as "pushing down the peg until contact". Since the primitive is an abstraction of robot control commands and encodes human prior knowledge, it reduces the exploration difficulty and yields better learning efficiency. In this paper, we learn robot assembly skills via primitives. Specifically, we formulate insertion primitives as parameterized actions: hybrid actions consisting of discrete primitive types and continuous primitive parameters. Compared with the previous work using a set of discretized parameters for each primitive, the agent in our method can freely choose primitive parameters from a continuous space, which is more flexible and efficient. To learn these insertion primitives, we propose Twin-Smoothed Multi-pass Deep Q-Network (TS-MP-DQN), an advanced version of MP-DQN with twin Q-network to reduce the Q-value over-estimation. Extensive experiments are conducted in the simulation and real world for validation. From experiment results, our approach achieves higher success rates than three baselines: MP-DQN with parameterized actions, primitives with discrete parameters, and continuous velocity control. Furthermore, learned primitives are robust to sim-to-real transfer and can generalize to challenging assembly tasks such as tight round peg-hole and complex shaped electric connectors with promising success rates. Experiment videos are available at https://msc.berkeley.edu/research/insertion-primitives.html.
|
2205.03891
|
Bin Zhu
|
Bin Zhu, Chong-Wah Ngo, Jingjing Chen, Wing-Kwong Chan
|
Cross-lingual Adaptation for Recipe Retrieval with Mixup
|
Accepted by ICMR2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cross-modal recipe retrieval has attracted research attention in recent
years, thanks to the availability of large-scale paired data for training.
Nevertheless, obtaining adequate recipe-image pairs covering the majority of
cuisines for supervised learning is difficult if not impossible. By
transferring knowledge learnt from a data-rich cuisine to a data-scarce
cuisine, domain adaptation sheds light on this practical problem. Nevertheless,
existing works assume recipes in source and target domains mostly
originate from the same cuisine and are written in the same language. This paper
studies unsupervised domain adaptation for image-to-recipe retrieval, where
recipes in source and target domains are in different languages. Moreover, only
recipes are available for training in the target domain. A novel recipe mixup
method is proposed to learn transferable embedding features between the two
domains. Specifically, recipe mixup produces mixed recipes to form an
intermediate domain by discretely exchanging the section(s) between source and
target recipes. To bridge the domain gap, recipe mixup loss is proposed to
constrain the intermediate domain to lie on the shortest geodesic path between
source and target domains in the recipe embedding space. By using Recipe 1M
dataset as source domain (English) and Vireo-FoodTransfer dataset as target
domain (Chinese), empirical experiments verify the effectiveness of recipe
mixup for cross-lingual adaptation in the context of image-to-recipe retrieval.
|
[
{
"created": "Sun, 8 May 2022 15:04:39 GMT",
"version": "v1"
}
] |
2022-05-10
|
[
[
"Zhu",
"Bin",
""
],
[
"Ngo",
"Chong-Wah",
""
],
[
"Chen",
"Jingjing",
""
],
[
"Chan",
"Wing-Kwong",
""
]
] |
Cross-modal recipe retrieval has attracted research attention in recent years, thanks to the availability of large-scale paired data for training. Nevertheless, obtaining adequate recipe-image pairs covering the majority of cuisines for supervised learning is difficult if not impossible. By transferring knowledge learnt from a data-rich cuisine to a data-scarce cuisine, domain adaptation sheds light on this practical problem. Nevertheless, existing works assume recipes in source and target domains mostly originate from the same cuisine and are written in the same language. This paper studies unsupervised domain adaptation for image-to-recipe retrieval, where recipes in source and target domains are in different languages. Moreover, only recipes are available for training in the target domain. A novel recipe mixup method is proposed to learn transferable embedding features between the two domains. Specifically, recipe mixup produces mixed recipes to form an intermediate domain by discretely exchanging the section(s) between source and target recipes. To bridge the domain gap, recipe mixup loss is proposed to constrain the intermediate domain to lie on the shortest geodesic path between source and target domains in the recipe embedding space. By using Recipe 1M dataset as source domain (English) and Vireo-FoodTransfer dataset as target domain (Chinese), empirical experiments verify the effectiveness of recipe mixup for cross-lingual adaptation in the context of image-to-recipe retrieval.
|
2208.06692
|
Giuseppe Antonio Di Luna
|
Fiorella Artuso, Marco Mormando, Giuseppe A. Di Luna, Leonardo
Querzoni
|
BinBert: Binary Code Understanding with a Fine-tunable and
Execution-aware Transformer
| null | null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A recent trend in binary code analysis promotes the use of neural solutions
based on instruction embedding models. An instruction embedding model is a
neural network that transforms sequences of assembly instructions into
embedding vectors. If the embedding network is trained such that the
translation from code to vectors partially preserves the semantics, the network
effectively represents an assembly code model.
In this paper we present BinBert, a novel assembly code model. BinBert is
built on a transformer pre-trained on a huge dataset of both assembly
instruction sequences and symbolic execution information. BinBert can be
applied to assembly instruction sequences and it is fine-tunable, i.e. it can
be re-trained as part of a neural architecture on task-specific data. Through
fine-tuning, BinBert learns how to apply the general knowledge acquired with
pre-training to the specific task.
We evaluated BinBert on a multi-task benchmark that we specifically designed
to test the understanding of assembly code. The benchmark is composed of
several tasks, some taken from the literature, and a few novel tasks that we
designed, with a mix of intrinsic and downstream tasks.
Our results show that BinBert outperforms state-of-the-art models for binary
instruction embedding, raising the bar for binary code understanding.
|
[
{
"created": "Sat, 13 Aug 2022 17:48:52 GMT",
"version": "v1"
}
] |
2022-08-16
|
[
[
"Artuso",
"Fiorella",
""
],
[
"Mormando",
"Marco",
""
],
[
"Di Luna",
"Giuseppe A.",
""
],
[
"Querzoni",
"Leonardo",
""
]
] |
A recent trend in binary code analysis promotes the use of neural solutions based on instruction embedding models. An instruction embedding model is a neural network that transforms sequences of assembly instructions into embedding vectors. If the embedding network is trained such that the translation from code to vectors partially preserves the semantics, the network effectively represents an assembly code model. In this paper we present BinBert, a novel assembly code model. BinBert is built on a transformer pre-trained on a huge dataset of both assembly instruction sequences and symbolic execution information. BinBert can be applied to assembly instruction sequences and it is fine-tunable, i.e. it can be re-trained as part of a neural architecture on task-specific data. Through fine-tuning, BinBert learns how to apply the general knowledge acquired with pre-training to the specific task. We evaluated BinBert on a multi-task benchmark that we specifically designed to test the understanding of assembly code. The benchmark is composed of several tasks, some taken from the literature, and a few novel tasks that we designed, with a mix of intrinsic and downstream tasks. Our results show that BinBert outperforms state-of-the-art models for binary instruction embedding, raising the bar for binary code understanding.
|
2201.08052
|
Haidong Xie
|
Haidong Xie, Yizhou Xu, Yuanqing Chen, Nan Ji, Shuai Yuan, Naijin Liu,
Xueshuang Xiang
|
Adversarial Jamming for a More Effective Constellation Attack
|
3 pages, 2 figures, published in The 13th International Symposium on
Antennas, Propagation and EM Theory (ISAPE 2021)
| null |
10.1109/ISAPE54070.2021.9753154
| null |
cs.CR eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The common jamming mode in wireless communication is band barrage jamming,
which is controllable and difficult to resist. Although this method is simple
to implement, it is obviously not the best jamming waveform. Therefore, based
on the idea of adversarial examples, we propose the adversarial jamming
waveform, which can independently optimize and find the best jamming waveform.
We attack QAM with adversarial jamming and find that the optimal jamming
waveform is equivalent to the amplitude and phase between the nearest
constellation points. Furthermore, by verifying the jamming performance on a
hardware platform, it is shown that our method significantly improves the bit
error rate compared to other methods.
|
[
{
"created": "Thu, 20 Jan 2022 08:36:31 GMT",
"version": "v1"
}
] |
2022-12-23
|
[
[
"Xie",
"Haidong",
""
],
[
"Xu",
"Yizhou",
""
],
[
"Chen",
"Yuanqing",
""
],
[
"Ji",
"Nan",
""
],
[
"Yuan",
"Shuai",
""
],
[
"Liu",
"Naijin",
""
],
[
"Xiang",
"Xueshuang",
""
]
] |
The common jamming mode in wireless communication is band barrage jamming, which is controllable and difficult to resist. Although this method is simple to implement, it is obviously not the best jamming waveform. Therefore, based on the idea of adversarial examples, we propose the adversarial jamming waveform, which can independently optimize and find the best jamming waveform. We attack QAM with adversarial jamming and find that the optimal jamming waveform is equivalent to the amplitude and phase between the nearest constellation points. Furthermore, by verifying the jamming performance on a hardware platform, it is shown that our method significantly improves the bit error rate compared to other methods.
|
1901.10812
|
Yehuda Dar
|
Yehuda Dar and Alfred M. Bruckstein
|
Benefiting from Duplicates of Compressed Data: Shift-Based Holographic
Compression of Images
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Storage systems often rely on multiple copies of the same compressed data,
enabling recovery in case of binary data errors, of course, at the expense of a
higher storage cost. In this paper we show that a wiser method of duplication
entails great potential benefits for data types tolerating approximate
representations, like images and videos. We propose a method to produce a set
of distinct compressed representations for a given signal, such that any subset
of them allows reconstruction of the signal at a quality depending only on the
number of compressed representations utilized. Essentially, we implement the
holographic representation idea, where all the representations are equally
important in refining the reconstruction. Here we propose to exploit the shift
sensitivity of common compression processes and generate holographic
representations via compression of various shifts of the signal. Two
implementations for the idea, based on standard compression methods, are
presented: the first is a simple, optimization-free design. The second approach
originates in a challenging rate-distortion optimization, mitigated by the
alternating direction method of multipliers (ADMM), leading to a process of
repeatedly applying standard compression techniques. Evaluation of the
approach, in conjunction with the JPEG2000 image compression standard, shows
the effectiveness of the optimization in providing compressed holographic
representations that, by means of an elementary reconstruction process, enable
impressive gains of several dBs in PSNR over exact duplications.
|
[
{
"created": "Wed, 30 Jan 2019 13:23:36 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Feb 2019 18:09:20 GMT",
"version": "v2"
}
] |
2019-02-08
|
[
[
"Dar",
"Yehuda",
""
],
[
"Bruckstein",
"Alfred M.",
""
]
] |
Storage systems often rely on multiple copies of the same compressed data, enabling recovery in case of binary data errors, of course, at the expense of a higher storage cost. In this paper we show that a wiser method of duplication entails great potential benefits for data types tolerating approximate representations, like images and videos. We propose a method to produce a set of distinct compressed representations for a given signal, such that any subset of them allows reconstruction of the signal at a quality depending only on the number of compressed representations utilized. Essentially, we implement the holographic representation idea, where all the representations are equally important in refining the reconstruction. Here we propose to exploit the shift sensitivity of common compression processes and generate holographic representations via compression of various shifts of the signal. Two implementations for the idea, based on standard compression methods, are presented: the first is a simple, optimization-free design. The second approach originates in a challenging rate-distortion optimization, mitigated by the alternating direction method of multipliers (ADMM), leading to a process of repeatedly applying standard compression techniques. Evaluation of the approach, in conjunction with the JPEG2000 image compression standard, shows the effectiveness of the optimization in providing compressed holographic representations that, by means of an elementary reconstruction process, enable impressive gains of several dBs in PSNR over exact duplications.
|
1210.2897
|
Francis J. O'Brien Jr.
|
Francis J. OBrien Jr, Nathan Johnnie, Susan Maloney and Aimee Ross
|
A Proposed General Method for Parameter Estimation of Noise Corrupted
Oscillator Systems
|
33 pages, 9 figures
| null | null | null |
cs.SY physics.data-an
|
http://creativecommons.org/licenses/publicdomain/
|
This paper proposes a means to estimate the parameters of noise-corrupted
oscillator systems. An application to a submarine combat control system (CCS)
rack is described as exemplary of the method.
|
[
{
"created": "Wed, 10 Oct 2012 16:18:45 GMT",
"version": "v1"
}
] |
2012-10-11
|
[
[
"OBrien",
"Francis J.",
"Jr"
],
[
"Johnnie",
"Nathan",
""
],
[
"Maloney",
"Susan",
""
],
[
"Ross",
"Aimee",
""
]
] |
This paper proposes a means to estimate the parameters of noise-corrupted oscillator systems. An application to a submarine combat control system (CCS) rack is described as exemplary of the method.
|
2207.02180
|
Mahmood Ahmadi
|
Aladdin Abdulhassan and Mahmood Ahmadi
|
Many-fields Packet Classification Using R-Tree and Field Concatenation
Technique
|
We will revise it and submit it again
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Software-Defined Networking (SDN) is an approach, proposed for enterprise
networks, that decouples the software-based control plane from the
hardware-based data plane; OpenFlow is the best-known flexible protocol for
managing network traffic between the control and data planes. SDN requires up
to 18 packet-header fields to be checked against a large many-field ruleset to
categorize packets into flows; this categorization process is called packet
classification. Network switches process all packets belonging to the same
flow in the same manner by applying the actions defined in the corresponding
rule. Packet classification enables new services such as filtering, blocking
traffic from unsafe sites, routing packets based on header information, and
prioritizing specific flows. High-performance algorithms for many-field packet
classification have gained much interest in the research community. This paper
presents a new method for many-field packet classification of the SDN flow
table using a rectangle tree (R-tree). In this method, the source and
destination IP addresses of each flow-table entry are converted to a
two-dimensional point. The remaining rule fields are concatenated into a
single field by taking their most significant bits, together with the rule's
ID, for insertion into the R-tree; for each rule, a compact binary flag
indicates each field's size, type, and ranges. Searching is then performed on
the rectangle tree to find the matching rules with the highest priority. In
simulations using the ClassBench databases, the results show that this method
achieves very good classification speed and significantly reduces the number
of memory accesses.
|
[
{
"created": "Tue, 5 Jul 2022 17:17:50 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jun 2023 14:47:46 GMT",
"version": "v2"
}
] |
2023-06-07
|
[
[
"Abdulhassan",
"Aladdin",
""
],
[
"Ahmadi",
"Mahmood",
""
]
] |
Software-Defined Networking (SDN) is an approach, proposed for enterprise networks, that decouples the software-based control plane from the hardware-based data plane; OpenFlow is the best-known flexible protocol for managing network traffic between the control and data planes. SDN requires up to 18 packet-header fields to be checked against a large many-field ruleset to categorize packets into flows; this categorization process is called packet classification. Network switches process all packets belonging to the same flow in the same manner by applying the actions defined in the corresponding rule. Packet classification enables new services such as filtering, blocking traffic from unsafe sites, routing packets based on header information, and prioritizing specific flows. High-performance algorithms for many-field packet classification have gained much interest in the research community. This paper presents a new method for many-field packet classification of the SDN flow table using a rectangle tree (R-tree). In this method, the source and destination IP addresses of each flow-table entry are converted to a two-dimensional point. The remaining rule fields are concatenated into a single field by taking their most significant bits, together with the rule's ID, for insertion into the R-tree; for each rule, a compact binary flag indicates each field's size, type, and ranges. Searching is then performed on the rectangle tree to find the matching rules with the highest priority. In simulations using the ClassBench databases, the results show that this method achieves very good classification speed and significantly reduces the number of memory accesses.
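As a hypothetical sketch of the mapping this abstract describes (field widths and bit layout are assumed for illustration, not taken from the paper), each rule's source/destination addresses become a 2-D point for R-tree insertion, while the remaining header fields and the rule ID are packed into one concatenated key:

```python
# Hypothetical sketch of the rule-to-point mapping and field concatenation.
import ipaddress

def ip_to_int(ip: str) -> int:
    """IPv4 dotted-quad string -> 32-bit integer coordinate."""
    return int(ipaddress.IPv4Address(ip))

def rule_to_point(src_ip: str, dst_ip: str) -> tuple[int, int]:
    """The rule's (source, destination) address pair becomes a 2-D point."""
    return (ip_to_int(src_ip), ip_to_int(dst_ip))

def concat_fields(proto: int, src_port: int, dst_port: int, rule_id: int) -> int:
    """Concatenate the remaining header fields and the rule ID into a single
    key (8-bit protocol, 16-bit ports, 16-bit rule ID -- assumed widths)."""
    return (proto << 48) | (src_port << 32) | (dst_port << 16) | rule_id

point = rule_to_point("192.168.1.1", "10.0.0.1")
key = concat_fields(6, 80, 443, 7)   # TCP, src port 80, dst port 443, rule #7
print(point, hex(key))
```

In a full implementation the points (or small rectangles, for prefix rules) would be inserted into a spatial index such as an R-tree, with the concatenated key resolving the remaining fields on candidate matches.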
|
2407.03582
|
Andrew Bouras
|
Andrew Bouras
|
Integrating Randomness in Large Language Models: A Linear Congruential
Generator Approach for Generating Clinically Relevant Content
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Generating diverse, high-quality outputs from language models is crucial for
applications in education and content creation. Achieving true randomness and
avoiding repetition remains a significant challenge. This study uses the Linear
Congruential Generator method for systematic fact selection, combined with
AI-powered content generation. We ensured unique combinations of
gastrointestinal physiology and pathology facts across multiple rounds,
integrating these facts into prompts for GPT-4o to create clinically relevant,
vignette-style outputs. Over 14 rounds, 98 unique outputs were generated,
demonstrating LCG's effectiveness in producing diverse and high-quality
content. This method addresses key issues of randomness and repetition,
enhancing the quality and efficiency of language model-generated content for
various applications.
|
[
{
"created": "Thu, 4 Jul 2024 02:21:47 GMT",
"version": "v1"
}
] |
2024-07-08
|
[
[
"Bouras",
"Andrew",
""
]
] |
Generating diverse, high-quality outputs from language models is crucial for applications in education and content creation. Achieving true randomness and avoiding repetition remains a significant challenge. This study uses the Linear Congruential Generator method for systematic fact selection, combined with AI-powered content generation. We ensured unique combinations of gastrointestinal physiology and pathology facts across multiple rounds, integrating these facts into prompts for GPT-4o to create clinically relevant, vignette-style outputs. Over 14 rounds, 98 unique outputs were generated, demonstrating LCG's effectiveness in producing diverse and high-quality content. This method addresses key issues of randomness and repetition, enhancing the quality and efficiency of language model-generated content for various applications.
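A minimal sketch of the selection scheme this abstract describes: a linear congruential generator (LCG) drives deterministic, repetition-free fact selection for prompt construction. The multiplier and increment below are the classic Numerical Recipes constants, not necessarily those used by the author:

```python
# Sketch of LCG-driven, repetition-free fact selection (assumed constants).
def lcg(seed: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32):
    """Yield an endless stream of pseudo-random integers in [0, m)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def select_unique_facts(facts: list[str], k: int, seed: int = 42) -> list[str]:
    """Pick k distinct facts, in LCG order, to embed in a generation prompt."""
    gen = lcg(seed)
    chosen: list[str] = []
    seen: set[int] = set()
    while len(chosen) < min(k, len(facts)):
        idx = next(gen) % len(facts)
        if idx not in seen:          # skip repeats to guarantee uniqueness
            seen.add(idx)
            chosen.append(facts[idx])
    return chosen

facts = [f"fact-{i}" for i in range(10)]
print(select_unique_facts(facts, 3))
```

Because the LCG is deterministic for a given seed, the same fact combinations can be reproduced across rounds, which is what makes the selection auditable while still avoiding repetition.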
|
2311.02093
|
Zhaoxin Chang
|
Zhaoxin Chang and Fusang Zhang and Daqing Zhang
|
An Exploration on Integrated Sensing and Communication for the Future
Smart Internet of Things
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internet of Things (IoT) technologies are the foundation of a fully connected
world. Currently, IoT devices (or nodes) primarily use dedicated sensors to
sense and collect data at large scales, and then transmit the data to target
nodes or gateways through wireless communication for further processing and
analytics. In recent years, research efforts have been made to explore the
feasibility of using wireless communication for sensing (while assiduously
improving the transmission performance of wireless signals), in an attempt to
achieve integrated sensing and communication (ISAC) for smart IoT of the
future. In this paper, we leverage the capabilities of LoRa, a long-range IoT
communication technology, to explore the possibility of using LoRa signals for
both sensing and communication. Based on LoRa, we propose ISAC designs in two
typical scenarios of smart IoT, and verify the feasibility and effectiveness of
our designs in soil moisture monitoring and human presence detection.
|
[
{
"created": "Fri, 27 Oct 2023 19:03:10 GMT",
"version": "v1"
}
] |
2023-11-07
|
[
[
"Chang",
"Zhaoxin",
""
],
[
"Zhang",
"Fusang",
""
],
[
"Zhang",
"Daqing",
""
]
] |
Internet of Things (IoT) technologies are the foundation of a fully connected world. Currently, IoT devices (or nodes) primarily use dedicated sensors to sense and collect data at large scales, and then transmit the data to target nodes or gateways through wireless communication for further processing and analytics. In recent years, research efforts have been made to explore the feasibility of using wireless communication for sensing (while assiduously improving the transmission performance of wireless signals), in an attempt to achieve integrated sensing and communication (ISAC) for smart IoT of the future. In this paper, we leverage the capabilities of LoRa, a long-range IoT communication technology, to explore the possibility of using LoRa signals for both sensing and communication. Based on LoRa, we propose ISAC designs in two typical scenarios of smart IoT, and verify the feasibility and effectiveness of our designs in soil moisture monitoring and human presence detection.
|
2111.14366
|
David Lovell
|
David Lovell, Kellie Vella, Diego Mu\~noz, Matt McKague, Margot
Brereton and Peter Ellis
|
Exploring technologies to better link physical evidence and digital
information for disaster victim identification
|
27 pages, 2 figures
|
Forensic Sciences Research 2022
|
10.1080/20961790.2021.2023418
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Disaster victim identification (DVI) entails a protracted process of evidence
collection and data matching to reconcile physical remains with victim
identity. Technology is critical to DVI by enabling the linkage of physical
evidence to information. However, labelling physical remains and collecting
data at the scene are dominated by low-technology paper-based practices. We
ask, how can technology help us tag and track the victims of disaster? Our
response has two parts. First, we conducted a human-computer interaction led
investigation into the systematic factors impacting DVI tagging and tracking
processes. Through interviews with Australian DVI practitioners, we explored
how technologies to improve linkage might fit with prevailing work practices
and preferences; practical and social considerations; and existing systems and
processes. Using insights from these interviews and relevant literature, we
identified four critical themes: protocols and training; stress and stressors;
the plurality of information capture and management systems; and practicalities
and constraints. Second, we applied the themes identified in the first part of
the investigation to critically review technologies that could support DVI
practitioners by enhancing DVI processes that link physical evidence to
information. This resulted in an overview of candidate technologies matched
with consideration of their key attributes. This study recognises the
importance of considering human factors that can affect technology adoption
into existing practices. We provide a searchable table (Supplementary
Information) that relates technologies to the key attributes relevant to DVI
practice, for the reader to apply to their own context. While this research
directly contributes to DVI, it also has applications to other domains in which
a physical/digital linkage is required, particularly within high-stress
environments.
|
[
{
"created": "Mon, 29 Nov 2021 07:46:56 GMT",
"version": "v1"
}
] |
2022-06-07
|
[
[
"Lovell",
"David",
""
],
[
"Vella",
"Kellie",
""
],
[
"Muñoz",
"Diego",
""
],
[
"McKague",
"Matt",
""
],
[
"Brereton",
"Margot",
""
],
[
"Ellis",
"Peter",
""
]
] |
Disaster victim identification (DVI) entails a protracted process of evidence collection and data matching to reconcile physical remains with victim identity. Technology is critical to DVI by enabling the linkage of physical evidence to information. However, labelling physical remains and collecting data at the scene are dominated by low-technology paper-based practices. We ask, how can technology help us tag and track the victims of disaster? Our response has two parts. First, we conducted a human-computer interaction led investigation into the systematic factors impacting DVI tagging and tracking processes. Through interviews with Australian DVI practitioners, we explored how technologies to improve linkage might fit with prevailing work practices and preferences; practical and social considerations; and existing systems and processes. Using insights from these interviews and relevant literature, we identified four critical themes: protocols and training; stress and stressors; the plurality of information capture and management systems; and practicalities and constraints. Second, we applied the themes identified in the first part of the investigation to critically review technologies that could support DVI practitioners by enhancing DVI processes that link physical evidence to information. This resulted in an overview of candidate technologies matched with consideration of their key attributes. This study recognises the importance of considering human factors that can affect technology adoption into existing practices. We provide a searchable table (Supplementary Information) that relates technologies to the key attributes relevant to DVI practice, for the reader to apply to their own context. While this research directly contributes to DVI, it also has applications to other domains in which a physical/digital linkage is required, particularly within high-stress environments.
|
1708.02383
|
Meng Fang
|
Meng Fang, Yuan Li and Trevor Cohn
|
Learning how to Active Learn: A Deep Reinforcement Learning Approach
|
To appear in EMNLP 2017
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active learning aims to select a small subset of data for annotation such
that a classifier learned on the data is highly accurate. This is usually done
using heuristic selection methods; however, the effectiveness of such methods
is limited and, moreover, the performance of heuristics varies between
datasets. To address these shortcomings, we introduce a novel formulation by
reframing active learning as a reinforcement learning problem and explicitly
learning a
data selection policy, where the policy takes the role of the active learning
heuristic. Importantly, our method allows the selection policy learned using
simulation on one language to be transferred to other languages. We demonstrate
our method using cross-lingual named entity recognition, observing uniform
improvements over traditional active learning.
|
[
{
"created": "Tue, 8 Aug 2017 07:06:48 GMT",
"version": "v1"
}
] |
2017-08-09
|
[
[
"Fang",
"Meng",
""
],
[
"Li",
"Yuan",
""
],
[
"Cohn",
"Trevor",
""
]
] |
Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods; however, the effectiveness of such methods is limited and, moreover, the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned using simulation on one language to be transferred to other languages. We demonstrate our method using cross-lingual named entity recognition, observing uniform improvements over traditional active learning.
|
2208.00565
|
Maia Stiber
|
Maia Stiber and Russell Taylor and Chien-Ming Huang
|
Modeling Human Response to Robot Errors for Timely Error Detection
|
Accepted to 2022 International Conference on Intelligent Robots and
Systems (IROS), 8 pages, 6 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In human-robot collaboration, robot errors are inevitable -- damaging user
trust, willingness to work together, and task performance. Prior work has shown
that people naturally respond to robot errors socially and that in social
interactions it is possible to use human responses to detect errors. However,
there is little exploration in the domain of non-social, physical human-robot
collaboration such as assembly and tool retrieval. In this work, we investigate
how people's organic, social responses to robot errors may be used to enable
timely automatic detection of errors in physical human-robot interactions. We
conducted a data collection study to obtain facial responses to train a
real-time detection algorithm and a case study to explore the generalizability
of our method with different task settings and errors. Our results show that
natural social responses are effective signals for timely detection and
localization of robot errors even in non-social contexts and that our method is
robust across a variety of task contexts, robot errors, and user responses.
This work contributes to robust error detection without detailed task
specifications.
|
[
{
"created": "Mon, 1 Aug 2022 01:55:31 GMT",
"version": "v1"
}
] |
2022-08-02
|
[
[
"Stiber",
"Maia",
""
],
[
"Taylor",
"Russell",
""
],
[
"Huang",
"Chien-Ming",
""
]
] |
In human-robot collaboration, robot errors are inevitable -- damaging user trust, willingness to work together, and task performance. Prior work has shown that people naturally respond to robot errors socially and that in social interactions it is possible to use human responses to detect errors. However, there is little exploration in the domain of non-social, physical human-robot collaboration such as assembly and tool retrieval. In this work, we investigate how people's organic, social responses to robot errors may be used to enable timely automatic detection of errors in physical human-robot interactions. We conducted a data collection study to obtain facial responses to train a real-time detection algorithm and a case study to explore the generalizability of our method with different task settings and errors. Our results show that natural social responses are effective signals for timely detection and localization of robot errors even in non-social contexts and that our method is robust across a variety of task contexts, robot errors, and user responses. This work contributes to robust error detection without detailed task specifications.
|
1303.0594
|
Dionysios Kalogerias
|
Dionysios S. Kalogerias and Athina P. Petropulu
|
On the Coherence Properties of Random Euclidean Distance Matrices
|
5 pages, SPAWC 2013
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present paper we focus on the coherence properties of general random
Euclidean distance matrices (EDMs), which are very closely related to the
respective matrix completion problem. This problem is of great interest in
several applications such as node localization in sensor networks with
limited connectivity. Our results directly provide sufficient conditions
under which an EDM can be successfully recovered with high probability from a
limited number of measurements.
|
[
{
"created": "Mon, 4 Mar 2013 03:52:16 GMT",
"version": "v1"
},
{
"created": "Sat, 11 May 2013 18:36:36 GMT",
"version": "v2"
}
] |
2013-05-14
|
[
[
"Kalogerias",
"Dionysios S.",
""
],
[
"Petropulu",
"Athina P.",
""
]
] |
In the present paper we focus on the coherence properties of general random Euclidean distance matrices (EDMs), which are very closely related to the respective matrix completion problem. This problem is of great interest in several applications such as node localization in sensor networks with limited connectivity. Our results directly provide sufficient conditions under which an EDM can be successfully recovered with high probability from a limited number of measurements.
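For context, the coherence notion at play here is the standard one from low-rank matrix completion (Candès and Recht); whether the paper uses precisely this normalization is not stated in the abstract. For an $r$-dimensional subspace $U \subseteq \mathbb{R}^n$ with orthogonal projector $P_U$ and standard basis vectors $\mathbf{e}_i$:

```latex
% Coherence of a subspace U (e.g., the column space of the EDM to be
% completed); low coherence -- singular vectors spread across coordinates --
% is what makes recovery from few sampled entries possible.
\mu(U) \;=\; \frac{n}{r}\,\max_{1 \le i \le n} \bigl\| P_{U}\,\mathbf{e}_i \bigr\|_2^2,
\qquad 1 \;\le\; \mu(U) \;\le\; \frac{n}{r}.
```

Bounding this quantity for random EDMs is what yields sample-complexity conditions of the kind the abstract mentions.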
|
2308.13841
|
Wanrong He
|
Wanrong He, Mitchell L. Gordon, Lindsay Popowski, Michael S. Bernstein
|
Cura: Curation at Social Media Scale
|
CSCW 2023
| null |
10.1145/3610186
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can online communities execute a focused vision for their space? Curation
offers one approach, where community leaders manually select content to share
with the community. Curation enables leaders to shape a space that matches
their taste, norms, and values, but the practice is often intractable at social
media scale: curators cannot realistically sift through hundreds or thousands
of submissions daily. In this paper, we contribute algorithmic and interface
foundations enabling curation at scale, and manifest these foundations in a
system called Cura. Our approach draws on the observation that, while curators'
attention is limited, other community members' upvotes are plentiful and
informative of curators' likely opinions. We thus contribute a
transformer-based curation model that predicts whether each curator will upvote
a post based on previous community upvotes. Cura applies this curation model to
create a feed of content that it predicts the curator would want in the
community. Evaluations demonstrate that the curation model accurately estimates
opinions of diverse curators, that changing curators for a community results in
clearly recognizable shifts in the community's content, and that, consequently,
curation can reduce anti-social behavior by half without extra moderation
effort. By sampling different types of curators, Cura lowers the threshold to
genres of curated social media ranging from editorial groups to stakeholder
roundtables to democracies.
|
[
{
"created": "Sat, 26 Aug 2023 10:25:05 GMT",
"version": "v1"
}
] |
2023-08-29
|
[
[
"He",
"Wanrong",
""
],
[
"Gordon",
"Mitchell L.",
""
],
[
"Popowski",
"Lindsay",
""
],
[
"Bernstein",
"Michael S.",
""
]
] |
How can online communities execute a focused vision for their space? Curation offers one approach, where community leaders manually select content to share with the community. Curation enables leaders to shape a space that matches their taste, norms, and values, but the practice is often intractable at social media scale: curators cannot realistically sift through hundreds or thousands of submissions daily. In this paper, we contribute algorithmic and interface foundations enabling curation at scale, and manifest these foundations in a system called Cura. Our approach draws on the observation that, while curators' attention is limited, other community members' upvotes are plentiful and informative of curators' likely opinions. We thus contribute a transformer-based curation model that predicts whether each curator will upvote a post based on previous community upvotes. Cura applies this curation model to create a feed of content that it predicts the curator would want in the community. Evaluations demonstrate that the curation model accurately estimates opinions of diverse curators, that changing curators for a community results in clearly recognizable shifts in the community's content, and that, consequently, curation can reduce anti-social behavior by half without extra moderation effort. By sampling different types of curators, Cura lowers the threshold to genres of curated social media ranging from editorial groups to stakeholder roundtables to democracies.
|
2407.16248
|
Xiaowan Hu
|
Xiaowan Hu, Yiyi Chen, Yan Li, Minquan Wang, Haoqian Wang, Quan Chen,
Han Li, Peng Jiang
|
Spatiotemporal Graph Guided Multi-modal Network for Livestreaming
Product Retrieval
|
16 pages, 12 figures
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid expansion of e-commerce, more consumers have become accustomed
to making purchases via livestreaming. Accurately identifying the products
being sold by salespeople, i.e., livestreaming product retrieval (LPR), poses a
fundamental and daunting challenge. The LPR task encompasses three primary
dilemmas in real-world scenarios: 1) recognizing intended products among
distractor products present in the background; 2) video-image heterogeneity,
whereby the appearance of products showcased in live streams often deviates
substantially from standardized product images in stores; and 3) numerous
confusing products with subtle visual nuances in the shop. To tackle these
challenges, we propose the Spatiotemporal Graphing Multi-modal Network (SGMN).
First, we employ a text-guided attention mechanism that leverages the spoken
content of salespeople to guide the model to focus on intended products,
emphasizing their salience over cluttered background products. Second, a
long-range spatiotemporal graph network is further designed to achieve both
instance-level interaction and frame-level matching, solving the misalignment
caused by video-image heterogeneity. Third, we propose a multi-modal
hard-example mining scheme, assisting the model in distinguishing highly
similar products
with fine-grained features across the video-image-text domain. Through
extensive quantitative and qualitative experiments, we demonstrate the superior
performance of our proposed SGMN model, surpassing the state-of-the-art methods
by a substantial margin. The code is available at
https://github.com/Huxiaowan/SGMN.
|
[
{
"created": "Tue, 23 Jul 2024 07:36:54 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jul 2024 05:56:55 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Aug 2024 09:05:59 GMT",
"version": "v3"
}
] |
2024-08-06
|
[
[
"Hu",
"Xiaowan",
""
],
[
"Chen",
"Yiyi",
""
],
[
"Li",
"Yan",
""
],
[
"Wang",
"Minquan",
""
],
[
"Wang",
"Haoqian",
""
],
[
"Chen",
"Quan",
""
],
[
"Li",
"Han",
""
],
[
"Jiang",
"Peng",
""
]
] |
With the rapid expansion of e-commerce, more consumers have become accustomed to making purchases via livestreaming. Accurately identifying the products being sold by salespeople, i.e., livestreaming product retrieval (LPR), poses a fundamental and daunting challenge. The LPR task encompasses three primary dilemmas in real-world scenarios: 1) recognizing intended products among distractor products present in the background; 2) video-image heterogeneity, whereby the appearance of products showcased in live streams often deviates substantially from standardized product images in stores; and 3) numerous confusing products with subtle visual nuances in the shop. To tackle these challenges, we propose the Spatiotemporal Graphing Multi-modal Network (SGMN). First, we employ a text-guided attention mechanism that leverages the spoken content of salespeople to guide the model to focus on intended products, emphasizing their salience over cluttered background products. Second, a long-range spatiotemporal graph network is further designed to achieve both instance-level interaction and frame-level matching, solving the misalignment caused by video-image heterogeneity. Third, we propose a multi-modal hard-example mining scheme, assisting the model in distinguishing highly similar products with fine-grained features across the video-image-text domain. Through extensive quantitative and qualitative experiments, we demonstrate the superior performance of our proposed SGMN model, surpassing the state-of-the-art methods by a substantial margin. The code is available at https://github.com/Huxiaowan/SGMN.
|