id (string, len 9–10) | submitter (string, len 1–64, nullable) | authors (string, len 4–20.7k) | title (string, len 4–246) | comments (string, len 1–523, nullable) | journal-ref (string, len 4–404, nullable) | doi (string, len 11–153, nullable) | report-no (string, len 2–254, nullable) | categories (string, len 5–98) | license (string, 9 classes) | orig_abstract (string, len 14–3.35k) | versions (list, len 1–60) | update_date (string, len 10) | authors_parsed (list, len 1–1.35k) | abstract (string, len 11–3.34k)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.08403 | Oumaima El Khettari | Oumaima El Khettari, Solen Quiniou, Samuel Chaffron | Building a Corpus for Biomedical Relation Extraction of Species Mentions | Accepted in BioNLP@ACL 2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present a manually annotated corpus, Species-Species Interaction, for
extracting meaningful binary relations between species in biomedical texts at
the sentence level, with a focus on the gut microbiota. The corpus leverages
PubTator to annotate species in full-text articles after evaluating different
Named Entity Recognition species taggers. Our first results are promising for
extracting relations between species using BERT and its biomedical variants.
| [
{
"created": "Wed, 14 Jun 2023 09:56:32 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Khettari",
"Oumaima El",
""
],
[
"Quiniou",
"Solen",
""
],
[
"Chaffron",
"Samuel",
""
]
] | We present a manually annotated corpus, Species-Species Interaction, for extracting meaningful binary relations between species in biomedical texts at the sentence level, with a focus on the gut microbiota. The corpus leverages PubTator to annotate species in full-text articles after evaluating different Named Entity Recognition species taggers. Our first results are promising for extracting relations between species using BERT and its biomedical variants. |
2310.03702 | Samuel Taggart | Jason Hartline, Darrell Hoy, and Samuel Taggart | Robust Analysis of Auction Equilibria | This paper provides an economic interpretation of results presented
in an extended abstract under the title "Price of Anarchy for Auction
Revenue" at the fifteenth ACM Conference on Economics and Computation | null | null | null | cs.GT | http://creativecommons.org/licenses/by/4.0/ | Equilibria in auctions can be very difficult to analyze, beyond the symmetric
environments where revenue equivalence renders the analysis straightforward.
This paper takes a robust approach to evaluating the equilibria of auctions.
Rather than identify the equilibria of an auction under specific environmental
conditions, it considers worst-case analysis, where an auction is evaluated
according to the worst environment and worst equilibrium in that environment.
It identifies a non-equilibrium property of auctions that governs whether or
not their worst-case equilibria are good for welfare and revenue. This property
is easy to analyze, can be refined from data, and composes across markets where
multiple auctions are run simultaneously.
| [
{
"created": "Thu, 5 Oct 2023 17:23:09 GMT",
"version": "v1"
}
] | 2023-10-06 | [
[
"Hartline",
"Jason",
""
],
[
"Hoy",
"Darrell",
""
],
[
"Taggart",
"Samuel",
""
]
] | Equilibria in auctions can be very difficult to analyze, beyond the symmetric environments where revenue equivalence renders the analysis straightforward. This paper takes a robust approach to evaluating the equilibria of auctions. Rather than identify the equilibria of an auction under specific environmental conditions, it considers worst-case analysis, where an auction is evaluated according to the worst environment and worst equilibrium in that environment. It identifies a non-equilibrium property of auctions that governs whether or not their worst-case equilibria are good for welfare and revenue. This property is easy to analyze, can be refined from data, and composes across markets where multiple auctions are run simultaneously. |
1709.02285 | Darius Burschka | Darius Burschka | Monocular Navigation in Large Scale Dynamic Environments | 2017 British Machine Vision Conference, London (BMVC 2017) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a processing technique for a robust reconstruction of motion
properties for single points in large scale, dynamic environments. We assume
that the acquisition camera is moving and that there are other independently
moving agents in a large environment, like road scenarios. The separation of
direction and magnitude of the reconstructed motion allows for robust
reconstruction of the dynamic state of the objects in situations where
conventional binocular systems fail due to a small signal (disparity) from the
images in the presence of a constant detection error, and where structure from motion
approaches fail due to unobserved motion of other agents between the camera
frames.
We present the mathematical framework and the sensitivity analysis for the
resulting system.
| [
{
"created": "Thu, 7 Sep 2017 14:46:45 GMT",
"version": "v1"
}
] | 2017-09-08 | [
[
"Burschka",
"Darius",
""
]
] | We present a processing technique for a robust reconstruction of motion properties for single points in large scale, dynamic environments. We assume that the acquisition camera is moving and that there are other independently moving agents in a large environment, like road scenarios. The separation of direction and magnitude of the reconstructed motion allows for robust reconstruction of the dynamic state of the objects in situations where conventional binocular systems fail due to a small signal (disparity) from the images in the presence of a constant detection error, and where structure from motion approaches fail due to unobserved motion of other agents between the camera frames. We present the mathematical framework and the sensitivity analysis for the resulting system. |
1211.6988 | Florian Meyer | Florian Meyer, Erwin Riegler, Ondrej Hlinka, and Franz Hlawatsch | Simultaneous Distributed Sensor Self-Localization and Target Tracking
Using Belief Propagation and Likelihood Consensus | 10 pages, 5 figures | null | null | null | cs.NI cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the framework of cooperative simultaneous localization and
tracking (CoSLAT), which provides a consistent combination of cooperative
self-localization (CSL) and distributed target tracking (DTT) in sensor
networks without a fusion center. CoSLAT extends simultaneous localization and
tracking (SLAT) in that it also uses intersensor measurements. Starting from a
factor graph formulation of the CoSLAT problem, we develop a particle-based,
distributed message passing algorithm for CoSLAT that combines nonparametric
belief propagation with the likelihood consensus scheme. The proposed CoSLAT
algorithm improves on state-of-the-art CSL and DTT algorithms by exchanging
probabilistic information between CSL and DTT. Simulation results demonstrate
substantial improvements in both self-localization and tracking performance.
| [
{
"created": "Thu, 29 Nov 2012 17:14:55 GMT",
"version": "v1"
}
] | 2012-11-30 | [
[
"Meyer",
"Florian",
""
],
[
"Riegler",
"Erwin",
""
],
[
"Hlinka",
"Ondrej",
""
],
[
"Hlawatsch",
"Franz",
""
]
] | We introduce the framework of cooperative simultaneous localization and tracking (CoSLAT), which provides a consistent combination of cooperative self-localization (CSL) and distributed target tracking (DTT) in sensor networks without a fusion center. CoSLAT extends simultaneous localization and tracking (SLAT) in that it also uses intersensor measurements. Starting from a factor graph formulation of the CoSLAT problem, we develop a particle-based, distributed message passing algorithm for CoSLAT that combines nonparametric belief propagation with the likelihood consensus scheme. The proposed CoSLAT algorithm improves on state-of-the-art CSL and DTT algorithms by exchanging probabilistic information between CSL and DTT. Simulation results demonstrate substantial improvements in both self-localization and tracking performance. |
1406.1528 | Dustin Lang | Dustin Lang, David W. Hogg, and Bernhard Scholkopf | Towards building a Crowd-Sourced Sky Map | Appeared at AI-STATS 2014 | JMLR Workshop and Conference Proceedings, 33 (AI & Statistics
2014), 549 | null | null | cs.CV astro-ph.IM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a system that builds a high dynamic-range and wide-angle image of
the night sky by combining a large set of input images. The method makes use of
pixel-rank information in the individual input images to improve a "consensus"
pixel rank in the combined image. Because it only makes use of ranks and the
complexity of the algorithm is linear in the number of images, the method is
useful for large sets of uncalibrated images that might have undergone unknown
non-linear tone mapping transformations for visualization or aesthetic reasons.
We apply the method to images of the night sky (of unknown provenance)
discovered on the Web. The method permits discovery of astronomical objects or
features that are not visible in any of the input images taken individually.
More importantly, however, it permits scientific exploitation of a huge source
of astronomical images that would not be available to astronomical research
without our automatic system.
| [
{
"created": "Thu, 5 Jun 2014 21:18:44 GMT",
"version": "v1"
}
] | 2014-06-09 | [
[
"Lang",
"Dustin",
""
],
[
"Hogg",
"David W.",
""
],
[
"Scholkopf",
"Bernhard",
""
]
] | We describe a system that builds a high dynamic-range and wide-angle image of the night sky by combining a large set of input images. The method makes use of pixel-rank information in the individual input images to improve a "consensus" pixel rank in the combined image. Because it only makes use of ranks and the complexity of the algorithm is linear in the number of images, the method is useful for large sets of uncalibrated images that might have undergone unknown non-linear tone mapping transformations for visualization or aesthetic reasons. We apply the method to images of the night sky (of unknown provenance) discovered on the Web. The method permits discovery of astronomical objects or features that are not visible in any of the input images taken individually. More importantly, however, it permits scientific exploitation of a huge source of astronomical images that would not be available to astronomical research without our automatic system. |
1806.02366 | Alex James Dr | Kamilya Smagulova and Kazybek Adam and Olga Krestinskaya and Alex
Pappachen James | Design of CMOS-memristor Circuits for LSTM architecture | null | IEEE International Conferences on Electron Devices and Solid-State
Circuits, 2018 | null | null | cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long Short-Term memory (LSTM) architecture is a well-known approach for
building recurrent neural networks (RNN) useful in sequential processing of
data in application to natural language processing. The near-sensor hardware
implementation of LSTM is challenged due to large parallelism and complexity.
We propose a 0.18 µm CMOS, GST memristor LSTM hardware architecture for
near-sensor processing. The proposed system is validated in a forecasting
problem based on a Keras model.
| [
{
"created": "Wed, 6 Jun 2018 18:14:59 GMT",
"version": "v1"
}
] | 2018-06-08 | [
[
"Smagulova",
"Kamilya",
""
],
[
"Adam",
"Kazybek",
""
],
[
"Krestinskaya",
"Olga",
""
],
[
"James",
"Alex Pappachen",
""
]
] | Long Short-Term memory (LSTM) architecture is a well-known approach for building recurrent neural networks (RNN) useful in sequential processing of data in application to natural language processing. The near-sensor hardware implementation of LSTM is challenged due to large parallelism and complexity. We propose a 0.18 µm CMOS, GST memristor LSTM hardware architecture for near-sensor processing. The proposed system is validated in a forecasting problem based on a Keras model. |
1603.02130 | Jing (Janet) Liu | Jing Liu and John D. Backes and Darren Cofer and Andrew Gacek | From Design Contracts to Component Requirements Verification | 15 pages, 2 figures, conference submission | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the development and verification of complex airborne systems, a
variety of languages and development environments are used for different levels
of the system hierarchy. As a result, there may be manual steps to translate
requirements between these different environments. This paper presents a
tool-supported export technique that translates high-level requirements from
the software architecture modeling environment into observers of requirements
that can be used for verification in the software component environment. This
allows efficient verification that the component designs comply with their
high-level requirements. It also provides an automated tool chain supporting
formal verification from system requirements down to low-level software
requirements that is consistent with certification guidance for avionics
systems. The effectiveness of the technique has been evaluated and demonstrated
on a medical infusion pump and an aircraft wheel braking system.
| [
{
"created": "Mon, 7 Mar 2016 16:09:42 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2016 21:28:19 GMT",
"version": "v2"
}
] | 2016-04-26 | [
[
"Liu",
"Jing",
""
],
[
"Backes",
"John D.",
""
],
[
"Cofer",
"Darren",
""
],
[
"Gacek",
"Andrew",
""
]
] | During the development and verification of complex airborne systems, a variety of languages and development environments are used for different levels of the system hierarchy. As a result, there may be manual steps to translate requirements between these different environments. This paper presents a tool-supported export technique that translates high-level requirements from the software architecture modeling environment into observers of requirements that can be used for verification in the software component environment. This allows efficient verification that the component designs comply with their high-level requirements. It also provides an automated tool chain supporting formal verification from system requirements down to low-level software requirements that is consistent with certification guidance for avionics systems. The effectiveness of the technique has been evaluated and demonstrated on a medical infusion pump and an aircraft wheel braking system. |
2304.10664 | Miriam J\"ager | Miriam J\"ager, Patrick H\"ubner, Dennis Haitz, Boris Jutzi | A Comparative Neural Radiance Field (NeRF) 3D Analysis of Camera Poses
from HoloLens Trajectories and Structure from Motion | 7 pages, 5 figures. Will be published in the ISPRS The International
Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Neural Radiance Fields (NeRFs) are trained using a set of camera poses and
associated images as input to estimate density and color values for each
position. The position-dependent density learning is of particular interest for
photogrammetry, enabling 3D reconstruction by querying and filtering the NeRF
coordinate system based on the object density. While traditional methods like
Structure from Motion are commonly used for camera pose calculation in
pre-processing for NeRFs, the HoloLens offers an interesting interface for
extracting the required input data directly. We present a workflow for
high-resolution 3D reconstructions almost directly from HoloLens data using
NeRFs. Thereby, different investigations are considered: Internal camera poses
from the HoloLens trajectory via a server application, and external camera
poses from Structure from Motion, both with an enhanced variant applied through
pose refinement. Results show that the internal camera poses lead to NeRF
convergence with a PSNR of 25\,dB with a simple rotation around the x-axis and
enable a 3D reconstruction. Pose refinement yields quality comparable to that of
external camera poses, resulting in an improved training process with a PSNR of
27\,dB and a better 3D reconstruction. Overall, NeRF reconstructions outperform
the conventional photogrammetric dense reconstruction using Multi-View Stereo
in terms of completeness and level of detail.
| [
{
"created": "Thu, 20 Apr 2023 22:17:28 GMT",
"version": "v1"
}
] | 2023-04-24 | [
[
"Jäger",
"Miriam",
""
],
[
"Hübner",
"Patrick",
""
],
[
"Haitz",
"Dennis",
""
],
[
"Jutzi",
"Boris",
""
]
] | Neural Radiance Fields (NeRFs) are trained using a set of camera poses and associated images as input to estimate density and color values for each position. The position-dependent density learning is of particular interest for photogrammetry, enabling 3D reconstruction by querying and filtering the NeRF coordinate system based on the object density. While traditional methods like Structure from Motion are commonly used for camera pose calculation in pre-processing for NeRFs, the HoloLens offers an interesting interface for extracting the required input data directly. We present a workflow for high-resolution 3D reconstructions almost directly from HoloLens data using NeRFs. Thereby, different investigations are considered: Internal camera poses from the HoloLens trajectory via a server application, and external camera poses from Structure from Motion, both with an enhanced variant applied through pose refinement. Results show that the internal camera poses lead to NeRF convergence with a PSNR of 25\,dB with a simple rotation around the x-axis and enable a 3D reconstruction. Pose refinement yields quality comparable to that of external camera poses, resulting in an improved training process with a PSNR of 27\,dB and a better 3D reconstruction. Overall, NeRF reconstructions outperform the conventional photogrammetric dense reconstruction using Multi-View Stereo in terms of completeness and level of detail. |
1805.00976 | Saba Ahmadian | Saba Ahmadian, Onur Mutlu, and Hossein Asadi | ECI-Cache: A High-Endurance and Cost-Efficient I/O Caching Scheme for
Virtualized Platforms | null | Proceedings of the ACM on Measurement and Analysis of Computing
Systems 2.1 (2018): 9 | 10.1145/3179412 | null | cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, high interest in using Virtual Machines (VMs) in data
centers and Cloud computing has significantly increased the demand for
high-performance data storage systems. Recent studies suggest using SSDs as a
caching layer for HDD-based storage subsystems in virtualization platforms.
Such studies neglect to address the endurance and cost of SSDs, which can
significantly affect the efficiency of I/O caching. Moreover, previous studies
only configure the cache size to provide the required performance level for
each VM, while neglecting other important parameters such as cache write policy
and request type, which can adversely affect both performance-per-cost and
endurance.
In this paper, we present a new high-Endurance and Cost-efficient I/O Caching
(ECI-Cache) scheme for virtualized platforms, which can significantly improve
both the performance-per-cost and endurance of storage subsystems as opposed to
previously proposed I/O caching schemes. Unlike traditional I/O caching schemes
which allocate cache size only based on reuse distance of accesses, we propose
a new metric, Useful Reuse Distance (URD), which considers the request type in
reuse distance calculation, resulting in improved performance-per-cost and
endurance for the SSD cache. Via online characterization of workloads and using
URD, ECI-Cache partitions the SSD cache across VMs and is able to dynamically
adjust the cache size and write policy for each VM. To evaluate the proposed
scheme, we have implemented ECI-Cache in an open source hypervisor, QEMU
(version 2.8.0), on a server running the CentOS 7 operating system (kernel
version 3.10.0-327). Experimental results show that our proposed scheme
improves the performance, performance-per-cost, and endurance of the SSD cache
by 17%, 30% and 65%, respectively, compared to the state-of-the-art dynamic
cache partitioning scheme.
| [
{
"created": "Wed, 2 May 2018 18:41:58 GMT",
"version": "v1"
}
] | 2018-05-04 | [
[
"Ahmadian",
"Saba",
""
],
[
"Mutlu",
"Onur",
""
],
[
"Asadi",
"Hossein",
""
]
] | In recent years, high interest in using Virtual Machines (VMs) in data centers and Cloud computing has significantly increased the demand for high-performance data storage systems. Recent studies suggest using SSDs as a caching layer for HDD-based storage subsystems in virtualization platforms. Such studies neglect to address the endurance and cost of SSDs, which can significantly affect the efficiency of I/O caching. Moreover, previous studies only configure the cache size to provide the required performance level for each VM, while neglecting other important parameters such as cache write policy and request type, which can adversely affect both performance-per-cost and endurance. In this paper, we present a new high-Endurance and Cost-efficient I/O Caching (ECI-Cache) scheme for virtualized platforms, which can significantly improve both the performance-per-cost and endurance of storage subsystems as opposed to previously proposed I/O caching schemes. Unlike traditional I/O caching schemes which allocate cache size only based on reuse distance of accesses, we propose a new metric, Useful Reuse Distance (URD), which considers the request type in reuse distance calculation, resulting in improved performance-per-cost and endurance for the SSD cache. Via online characterization of workloads and using URD, ECI-Cache partitions the SSD cache across VMs and is able to dynamically adjust the cache size and write policy for each VM. To evaluate the proposed scheme, we have implemented ECI-Cache in an open source hypervisor, QEMU (version 2.8.0), on a server running the CentOS 7 operating system (kernel version 3.10.0-327). Experimental results show that our proposed scheme improves the performance, performance-per-cost, and endurance of the SSD cache by 17%, 30% and 65%, respectively, compared to the state-of-the-art dynamic cache partitioning scheme. |
2210.14582 | Yan Huang | Xiang Long, Yan Huang, Zhendong Liu, Lansheng Han, Haili Sun, Jingyuan
He | WebCrack: Dynamic Dictionary Adjustment for Web Weak Password Detection
based on Blasting Response Event Discrimination | 22 pages, 6 figures, 4 tables | null | null | null | cs.CR cs.DS | http://creativecommons.org/licenses/by/4.0/ | The feature diversity of different web systems in page elements, submission
contents and return information makes it difficult to detect weak passwords
automatically. To solve this problem, a multi-factor correlation detection
method, integrated in the DBKER algorithm, is proposed to achieve automatic
detection of web weak passwords and universal passwords. It generates password
dictionaries based on the PCFG algorithm and judges the blasting result via 4
steps using traditional static keyword features and dynamic page feature
information. Then the blasting failure events are discriminated and the
usernames are blasted based on response time. Thereafter the weak password
dictionary is dynamically adjusted according to the hints provided by the
response failure page. Based on the algorithm, this paper implements a
detection system named WebCrack. Experimental results of two blasting tests on
DedeCMS and Discuz! systems as well as a random backend test show that the
proposed method can detect weak passwords and universal passwords of various
web systems with an average accuracy rate of about 93.75%, providing practical
security advisories for users' password settings.
| [
{
"created": "Wed, 26 Oct 2022 09:34:41 GMT",
"version": "v1"
}
] | 2022-10-27 | [
[
"Long",
"Xiang",
""
],
[
"Huang",
"Yan",
""
],
[
"Liu",
"Zhendong",
""
],
[
"Han",
"Lansheng",
""
],
[
"Sun",
"Haili",
""
],
[
"He",
"Jingyuan",
""
]
] | The feature diversity of different web systems in page elements, submission contents and return information makes it difficult to detect weak passwords automatically. To solve this problem, a multi-factor correlation detection method, integrated in the DBKER algorithm, is proposed to achieve automatic detection of web weak passwords and universal passwords. It generates password dictionaries based on the PCFG algorithm and judges the blasting result via 4 steps using traditional static keyword features and dynamic page feature information. Then the blasting failure events are discriminated and the usernames are blasted based on response time. Thereafter the weak password dictionary is dynamically adjusted according to the hints provided by the response failure page. Based on the algorithm, this paper implements a detection system named WebCrack. Experimental results of two blasting tests on DedeCMS and Discuz! systems as well as a random backend test show that the proposed method can detect weak passwords and universal passwords of various web systems with an average accuracy rate of about 93.75%, providing practical security advisories for users' password settings. |
1703.07534 | Dong Liu | Jingxian Zhang and Dong Liu | Visual Analyses of Music History: A User-Centric Approach | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music history, referring to the records of users' listening or downloading
history in online music services, is the primary source for music service
providers to analyze users' preferences on music and thus to provide
personalized recommendations to users. In order to engage users in the
service and to improve user experience, it would be beneficial to provide
visual analyses of one user's music history as well as visualized
recommendations to that user. In this paper, we take a user-centric approach to
the design of such visual analyses. We start by investigating user needs on
such visual analyses and recommendations, then propose several different
visualization schemes, and perform a pilot study to collect user feedback on
the designed schemes. We further conduct user studies to verify the utility of
the proposed schemes, and the results not only demonstrate the effectiveness of
our proposed visualization, but also provide important insights to guide the
visualization design in the future.
| [
{
"created": "Wed, 22 Mar 2017 05:37:19 GMT",
"version": "v1"
}
] | 2017-03-23 | [
[
"Zhang",
"Jingxian",
""
],
[
"Liu",
"Dong",
""
]
] | Music history, referring to the records of users' listening or downloading history in online music services, is the primary source for music service providers to analyze users' preferences on music and thus to provide personalized recommendations to users. In order to engage users in the service and to improve user experience, it would be beneficial to provide visual analyses of one user's music history as well as visualized recommendations to that user. In this paper, we take a user-centric approach to the design of such visual analyses. We start by investigating user needs on such visual analyses and recommendations, then propose several different visualization schemes, and perform a pilot study to collect user feedback on the designed schemes. We further conduct user studies to verify the utility of the proposed schemes, and the results not only demonstrate the effectiveness of our proposed visualization, but also provide important insights to guide the visualization design in the future. |
2403.11752 | Vigneshwaran Shankaran | Aditya Narayan Sankaran, Vigneshwaran Shankaran, Sampath Lonka, Rajesh
Sharma | Revisiting The Classics: A Study on Identifying and Rectifying Gender
Stereotypes in Rhymes and Poems | Accepted to appear at LREC-COLING 2024 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Rhymes and poems are a powerful medium for transmitting cultural norms and
societal roles. However, the pervasive existence of gender stereotypes in these
works perpetuates biased perceptions and limits the scope of individuals'
identities. Past works have shown that stereotyping and prejudice emerge in
early childhood, and developmental research on causal mechanisms is critical
for understanding and controlling stereotyping and prejudice. This work
contributes by gathering a dataset of rhymes and poems to identify gender
stereotypes and by proposing a model that identifies gender bias with 97% accuracy.
Gender stereotypes were rectified using a Large Language Model (LLM) and its
effectiveness was evaluated in a comparative survey against human educator
rectifications. To summarize, this work highlights the pervasive nature of
gender stereotypes in literary works and reveals the potential of LLMs to
rectify gender stereotypes. This study raises awareness and promotes
inclusivity within artistic expressions, making a significant contribution to
the discourse on gender equality.
| [
{
"created": "Mon, 18 Mar 2024 13:02:02 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Mar 2024 16:33:12 GMT",
"version": "v2"
}
] | 2024-03-26 | [
[
"Sankaran",
"Aditya Narayan",
""
],
[
"Shankaran",
"Vigneshwaran",
""
],
[
"Lonka",
"Sampath",
""
],
[
"Sharma",
"Rajesh",
""
]
] | Rhymes and poems are a powerful medium for transmitting cultural norms and societal roles. However, the pervasive existence of gender stereotypes in these works perpetuates biased perceptions and limits the scope of individuals' identities. Past works have shown that stereotyping and prejudice emerge in early childhood, and developmental research on causal mechanisms is critical for understanding and controlling stereotyping and prejudice. This work contributes by gathering a dataset of rhymes and poems to identify gender stereotypes and by proposing a model that identifies gender bias with 97% accuracy. Gender stereotypes were rectified using a Large Language Model (LLM) and its effectiveness was evaluated in a comparative survey against human educator rectifications. To summarize, this work highlights the pervasive nature of gender stereotypes in literary works and reveals the potential of LLMs to rectify gender stereotypes. This study raises awareness and promotes inclusivity within artistic expressions, making a significant contribution to the discourse on gender equality. |
1608.08505 | Carla Binucci | Carla Binucci, Markus Chimani, Walter Didimo, Giuseppe Liotta,
Fabrizio Montecchiani | Placing Arrows in Directed Graph Drawings | Appears in the Proceedings of the 24th International Symposium on
Graph Drawing and Network Visualization (GD 2016) | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of placing arrow heads in directed graph drawings
without them overlapping other drawn objects. This gives drawings where edge
directions can be deduced unambiguously. We show hardness of the problem,
present exact and heuristic algorithms, and report on a practical study.
| [
{
"created": "Tue, 30 Aug 2016 15:33:56 GMT",
"version": "v1"
}
] | 2016-08-31 | [
[
"Binucci",
"Carla",
""
],
[
"Chimani",
"Markus",
""
],
[
"Didimo",
"Walter",
""
],
[
"Liotta",
"Giuseppe",
""
],
[
"Montecchiani",
"Fabrizio",
""
]
] | We consider the problem of placing arrow heads in directed graph drawings without them overlapping other drawn objects. This gives drawings where edge directions can be deduced unambiguously. We show hardness of the problem, present exact and heuristic algorithms, and report on a practical study. |
2308.09604 | Xiaokang Pan | Jin Liu, Xiaokang Pan, Junwen Duan, Hongdong Li, Youqi Li, Zhe Qu | Faster Stochastic Variance Reduction Methods for Compositional MiniMax
Optimization | null | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper delves into the realm of stochastic optimization for compositional
minimax optimization - a pivotal challenge across various machine learning
domains, including deep AUC and reinforcement learning policy evaluation.
Despite its significance, the problem of compositional minimax optimization is
still under-explored. Adding to the complexity, current methods of
compositional minimax optimization are plagued by sub-optimal complexities or
heavy reliance on sizable batch sizes. To respond to these constraints, this
paper introduces a novel method, called Nested STOchastic Recursive Momentum
(NSTORM), which can achieve the optimal sample complexity of
$O(\kappa^3/\epsilon^3)$ to obtain an $\epsilon$-accurate solution. We also
demonstrate that NSTORM can achieve the same sample complexity under the
Polyak-Łojasiewicz (PL) condition - an insightful extension of its
capabilities. Yet,
NSTORM encounters an issue with its requirement for low learning rates,
potentially constraining its real-world applicability in machine learning. To
overcome this hurdle, we present ADAptive NSTORM (ADA-NSTORM) with adaptive
learning rates. We demonstrate that ADA-NSTORM can achieve the same sample
complexity, while the experimental results show that it is more effective. All
the proposed complexities indicate that our proposed methods match the lower
bounds of existing minimax optimizations, without requiring a large batch size in each
iteration. Extensive experiments support the efficiency of our proposed
methods.
| [
{
"created": "Fri, 18 Aug 2023 14:57:21 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Dec 2023 05:28:51 GMT",
"version": "v2"
}
] | 2023-12-13 | [
[
"Liu",
"Jin",
""
],
[
"Pan",
"Xiaokang",
""
],
[
"Duan",
"Junwen",
""
],
[
"Li",
"Hongdong",
""
],
[
"Li",
"Youqi",
""
],
[
"Qu",
"Zhe",
""
]
] | This paper delves into the realm of stochastic optimization for compositional minimax optimization - a pivotal challenge across various machine learning domains, including deep AUC and reinforcement learning policy evaluation. Despite its significance, the problem of compositional minimax optimization is still under-explored. Adding to the complexity, current methods of compositional minimax optimization are plagued by sub-optimal complexities or heavy reliance on sizable batch sizes. To respond to these constraints, this paper introduces a novel method, called Nested STOchastic Recursive Momentum (NSTORM), which can achieve the optimal sample complexity of $O(\kappa^3/\epsilon^3)$ to obtain an $\epsilon$-accurate solution. We also demonstrate that NSTORM can achieve the same sample complexity under the Polyak-Łojasiewicz (PL) condition - an insightful extension of its capabilities. Yet, NSTORM encounters an issue with its requirement for low learning rates, potentially constraining its real-world applicability in machine learning. To overcome this hurdle, we present ADAptive NSTORM (ADA-NSTORM) with adaptive learning rates. We demonstrate that ADA-NSTORM can achieve the same sample complexity, while the experimental results show that it is more effective. All the proposed complexities indicate that our proposed methods match the lower bounds of existing minimax optimizations, without requiring a large batch size in each iteration. Extensive experiments support the efficiency of our proposed methods. |
2208.12850 | Michael Baddeley Dr | Michael Baddeley, Yevgen Gyl, Markus Schuss, Xiaoyuan Ma, and Carlo
Alberto Boano | OSF: An Open-Source Framework for Synchronous Flooding over Multiple
Physical Layers | null | null | null | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | Flooding protocols based on concurrent transmissions are regarded as the most
reliable way to collect or disseminate data across a multi-hop low-power
wireless mesh network. Recent works have shown that such protocols are
effective for narrowband communication not only over IEEE 802.15.4, but also
over the BLE 5 physical layers (PHYs). However, to date, existing literature
has only built synchronous flooding solutions on top of a single PHY, and there
has been no attempt to leverage different PHYs at runtime to increase
performance. This paper fills this gap and presents OSF, an open-source
framework that enables the design of multi-PHY synchronous flooding solutions
thanks to a novel radio driver and middle-ware architecture capable of
dynamically switching the underlying physical layer. This allows exploitation
of the specific benefits of each PHY (e.g., higher data-rate, increased
robustness) on-demand during each flood, increasing performance. We tailor OSF
to the off-the-shelf nRF52840 platform, and showcase its benefits by comparing
single-PHY and multi-PHY synchronous flooding solutions on a real-world
testbed.
| [
{
"created": "Fri, 26 Aug 2022 19:40:29 GMT",
"version": "v1"
}
] | 2022-08-30 | [
[
"Baddeley",
"Michael",
""
],
[
"Gyl",
"Yevgen",
""
],
[
"Schuss",
"Markus",
""
],
[
"Ma",
"Xiaoyuan",
""
],
[
"Boano",
"Carlo Alberto",
""
]
] | Flooding protocols based on concurrent transmissions are regarded as the most reliable way to collect or disseminate data across a multi-hop low-power wireless mesh network. Recent works have shown that such protocols are effective for narrowband communication not only over IEEE 802.15.4, but also over the BLE 5 physical layers (PHYs). However, to date, existing literature has only built synchronous flooding solutions on top of a single PHY, and there has been no attempt to leverage different PHYs at runtime to increase performance. This paper fills this gap and presents OSF, an open-source framework that enables the design of multi-PHY synchronous flooding solutions thanks to a novel radio driver and middle-ware architecture capable of dynamically switching the underlying physical layer. This allows exploitation of the specific benefits of each PHY (e.g., higher data-rate, increased robustness) on-demand during each flood, increasing performance. We tailor OSF to the off-the-shelf nRF52840 platform, and showcase its benefits by comparing single-PHY and multi-PHY synchronous flooding solutions on a real-world testbed. |
2210.09604 | Xiaoning Liu | Xiaoning Liu | Perceptual Multi-Exposure Fusion | The current version is our previous work rejected by IEEE TMM. I'm
very sorry and I want to withdraw this submitted version. I will resubmit it
when I improve it in the future. The version involves some ideas we are doing | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an ever-increasing demand for high dynamic range (HDR) scene shooting,
multi-exposure image fusion (MEF) technology has abounded. In recent years,
multi-scale exposure fusion approaches based on detail-enhancement have led the
way for improvement in highlight and shadow details. Most of such methods,
however, are too computationally expensive to be deployed on mobile devices.
This paper presents a perceptual multi-exposure fusion method that not only
ensures fine shadow/highlight details but also has lower complexity than
detail-enhanced methods. We analyze the potential defects of three classical
exposure measures in lieu of using a detail-enhancement component and improve
two of them, namely adaptive Well-exposedness (AWE) and the gradient of color
images (3-D gradient). AWE designed in YCbCr color space considers the
difference between varying exposure images. 3-D gradient is employed to
extract fine details. We build a large-scale multi-exposure benchmark dataset
suitable for static scenes, which contains 167 image sequences all told.
Experiments on the constructed dataset demonstrate that the proposed method
exceeds eight existing state-of-the-art approaches in terms of visual quality
and MEF-SSIM value. Moreover, our approach can bring further improvement to
current image enhancement techniques, ensuring fine detail in bright light.
| [
{
"created": "Tue, 18 Oct 2022 05:34:58 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Oct 2022 06:58:48 GMT",
"version": "v2"
}
] | 2022-10-20 | [
[
"Liu",
"Xiaoning",
""
]
] | As an ever-increasing demand for high dynamic range (HDR) scene shooting, multi-exposure image fusion (MEF) technology has abounded. In recent years, multi-scale exposure fusion approaches based on detail-enhancement have led the way for improvement in highlight and shadow details. Most of such methods, however, are too computationally expensive to be deployed on mobile devices. This paper presents a perceptual multi-exposure fusion method that not only ensures fine shadow/highlight details but also has lower complexity than detail-enhanced methods. We analyze the potential defects of three classical exposure measures in lieu of using a detail-enhancement component and improve two of them, namely adaptive Well-exposedness (AWE) and the gradient of color images (3-D gradient). AWE designed in YCbCr color space considers the difference between varying exposure images. 3-D gradient is employed to extract fine details. We build a large-scale multi-exposure benchmark dataset suitable for static scenes, which contains 167 image sequences all told. Experiments on the constructed dataset demonstrate that the proposed method exceeds eight existing state-of-the-art approaches in terms of visual quality and MEF-SSIM value. Moreover, our approach can bring further improvement to current image enhancement techniques, ensuring fine detail in bright light. |
1904.01782 | Rui Liu | Rui Liu, Yu Liu, Xinyu Gong, Xiaogang Wang, Hongsheng Li | Conditional Adversarial Generative Flow for Controllable Image Synthesis | Accepted by CVPR 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flow-based generative models show great potential in image synthesis due to
their reversible pipeline and exact log-likelihood target, yet they suffer from
weak ability for conditional image synthesis, especially for multi-label or
unaware conditions. This is because the potential distribution of image
conditions is hard to measure precisely from its latent variable $z$. In this
paper, based on modeling a joint probabilistic density of an image and its
conditions, we propose a novel flow-based generative model named conditional
adversarial generative flow (CAGlow). Instead of disentangling attributes from
latent space, we blaze a new trail for learning an encoder to estimate the
mapping from condition space to latent space in an adversarial manner. Given a
specific condition $c$, CAGlow can encode it to a sampled $z$, and then enable
robust conditional image synthesis in complex situations like combining person
identity with multiple attributes. The proposed CAGlow can be implemented in
both supervised and unsupervised manners, thus can synthesize images with
conditional information like categories, attributes, and even some unknown
properties. Extensive experiments show that CAGlow ensures the independence of
different conditions and outperforms regular Glow to a significant extent.
| [
{
"created": "Wed, 3 Apr 2019 05:58:01 GMT",
"version": "v1"
}
] | 2019-04-04 | [
[
"Liu",
"Rui",
""
],
[
"Liu",
"Yu",
""
],
[
"Gong",
"Xinyu",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Li",
"Hongsheng",
""
]
] | Flow-based generative models show great potential in image synthesis due to their reversible pipeline and exact log-likelihood target, yet they suffer from weak ability for conditional image synthesis, especially for multi-label or unaware conditions. This is because the potential distribution of image conditions is hard to measure precisely from its latent variable $z$. In this paper, based on modeling a joint probabilistic density of an image and its conditions, we propose a novel flow-based generative model named conditional adversarial generative flow (CAGlow). Instead of disentangling attributes from latent space, we blaze a new trail for learning an encoder to estimate the mapping from condition space to latent space in an adversarial manner. Given a specific condition $c$, CAGlow can encode it to a sampled $z$, and then enable robust conditional image synthesis in complex situations like combining person identity with multiple attributes. The proposed CAGlow can be implemented in both supervised and unsupervised manners, thus can synthesize images with conditional information like categories, attributes, and even some unknown properties. Extensive experiments show that CAGlow ensures the independence of different conditions and outperforms regular Glow to a significant extent. |
1511.08063 | Julien Mineraud | Julien Mineraud and Sasu Tarkoma | Toward interoperability for the Internet of Things with meta-hubs | 7 pages, 4 figures | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet of Things (IoT) envisions that objects may be connected to the
Internet, producing and consuming data in real-time. Today, numerous middleware
platforms are available to facilitate the communication with these objects.
Unfortunately, the interoperability of these platforms is very limited because
it requires to "manually" connect the services proposed by each platform. One
key design goal for our contribution is not to build yet another middleware,
but rather to augment the functionalities of existing systems via an extension
to support their integration into a network of heterogeneous IoT hubs. The
extension includes a RESTful API to manipulate the basic component of our
extension, the IoT feeds. The IoT feeds allow the platform's owner to
dynamically marshal the IoT features connected to the platform, as well as the
data that they produce. Furthermore, the feeds enable the owner to manage and
control the data flows before connecting them to his applications.
Subsequently, these feeds may also be published to meta-hubs in order to expose
them to third parties. We evaluated an implementation of our extension for Android
systems to show the feasibility of managing the data flows using the RESTful
API on this platform.
| [
{
"created": "Wed, 25 Nov 2015 13:57:08 GMT",
"version": "v1"
}
] | 2015-11-26 | [
[
"Mineraud",
"Julien",
""
],
[
"Tarkoma",
"Sasu",
""
]
] | The Internet of Things (IoT) envisions that objects may be connected to the Internet, producing and consuming data in real-time. Today, numerous middleware platforms are available to facilitate the communication with these objects. Unfortunately, the interoperability of these platforms is very limited because it requires "manually" connecting the services proposed by each platform. One key design goal for our contribution is not to build yet another middleware, but rather to augment the functionalities of existing systems via an extension to support their integration into a network of heterogeneous IoT hubs. The extension includes a RESTful API to manipulate the basic component of our extension, the IoT feeds. The IoT feeds allow the platform's owner to dynamically marshal the IoT features connected to the platform, as well as the data that they produce. Furthermore, the feeds enable the owner to manage and control the data flows before connecting them to his applications. Subsequently, these feeds may also be published to meta-hubs in order to expose them to third parties. We evaluated an implementation of our extension for Android systems to show the feasibility of managing the data flows using the RESTful API on this platform. |
2203.10579 | \`Alex R. Atrio | \`Alex R. Atrio, Andrei Popescu-Belis | Small Batch Sizes Improve Training of Low-Resource Neural MT | To be published in 18th International Conference on Natural Language
Processing (ICON 2021) | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We study the role of an essential hyper-parameter that governs the training
of Transformers for neural machine translation in a low-resource setting: the
batch size. Using theoretical insights and experimental evidence, we argue
against the widespread belief that batch size should be set as large as allowed
by the memory of the GPUs. We show that in a low-resource setting, a smaller
batch size leads to higher scores in a shorter training time, and argue that
this is due to better regularization of the gradients during training.
| [
{
"created": "Sun, 20 Mar 2022 15:14:39 GMT",
"version": "v1"
}
] | 2022-03-22 | [
[
"Atrio",
"Àlex R.",
""
],
[
"Popescu-Belis",
"Andrei",
""
]
] | We study the role of an essential hyper-parameter that governs the training of Transformers for neural machine translation in a low-resource setting: the batch size. Using theoretical insights and experimental evidence, we argue against the widespread belief that batch size should be set as large as allowed by the memory of the GPUs. We show that in a low-resource setting, a smaller batch size leads to higher scores in a shorter training time, and argue that this is due to better regularization of the gradients during training. |
0805.4323 | Martin Hoefer | Ulrik Brandes, Martin Hoefer, Bobo Nick | Network Connection Games with Disconnected Equilibria | 18 pages, 4 figures, extended abstract in WINE 2008 | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we extend a popular non-cooperative network creation game (NCG)
to allow for disconnected equilibrium networks. There are n players, each of
whom is a vertex in a graph, and a strategy is a subset of players to build
edges to. For
each edge a player must pay a cost \alpha, and the individual cost for a player
represents a trade-off between edge costs and shortest path lengths to all
other players. We extend the model to a penalized game (PCG), for which we
reduce the penalty counted towards the individual cost for a pair of
disconnected players to a finite value \beta. Our analysis concentrates on
existence, structure, and cost of disconnected Nash and strong equilibria.
Although the PCG is not a potential game, pure Nash equilibria always and pure
strong equilibria very often exist. We provide tight conditions under which
disconnected Nash (strong) equilibria can evolve. Components of these
equilibria must be Nash (strong) equilibria of a smaller NCG. However, in
contrast to the NCG, for almost all parameter values no tree is a stable
component. Finally, we present a detailed characterization of the price of
anarchy that reveals cases in which the price of anarchy is \Theta(n) and thus
several orders of magnitude larger than in the NCG. Perhaps surprisingly, the
strong price of anarchy increases to at most 4. This indicates that global
communication and coordination can be extremely valuable to overcome socially
inferior topologies in distributed selfish network design.
| [
{
"created": "Wed, 28 May 2008 12:09:15 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Oct 2008 22:12:35 GMT",
"version": "v2"
}
] | 2008-10-28 | [
[
"Brandes",
"Ulrik",
""
],
[
"Hoefer",
"Martin",
""
],
[
"Nick",
"Bobo",
""
]
] | In this paper we extend a popular non-cooperative network creation game (NCG) to allow for disconnected equilibrium networks. There are n players, each of whom is a vertex in a graph, and a strategy is a subset of players to build edges to. For each edge a player must pay a cost \alpha, and the individual cost for a player represents a trade-off between edge costs and shortest path lengths to all other players. We extend the model to a penalized game (PCG), for which we reduce the penalty counted towards the individual cost for a pair of disconnected players to a finite value \beta. Our analysis concentrates on existence, structure, and cost of disconnected Nash and strong equilibria. Although the PCG is not a potential game, pure Nash equilibria always and pure strong equilibria very often exist. We provide tight conditions under which disconnected Nash (strong) equilibria can evolve. Components of these equilibria must be Nash (strong) equilibria of a smaller NCG. However, in contrast to the NCG, for almost all parameter values no tree is a stable component. Finally, we present a detailed characterization of the price of anarchy that reveals cases in which the price of anarchy is \Theta(n) and thus several orders of magnitude larger than in the NCG. Perhaps surprisingly, the strong price of anarchy increases to at most 4. This indicates that global communication and coordination can be extremely valuable to overcome socially inferior topologies in distributed selfish network design. |
1903.01905 | Bernhard Kainz | Daniel Grzech, Lo\"ic le Folgoc, Mattias P. Heinrich, Bishesh Khanal,
Jakub Moll, Julia A. Schnabel, Ben Glocker, Bernhard Kainz | FastReg: Fast Non-Rigid Registration via Accelerated Optimisation on the
Manifold of Diffeomorphisms | There is an ongoing dispute about the presentation of this paper. It
will be withdrawn until the dispute is resolved | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an implementation of a new approach to diffeomorphic non-rigid
registration of medical images. The method is based on optical flow and warps
images via gradient flow with the standard $L^2$ inner product. To compute the
transformation, we rely on accelerated optimisation on the manifold of
diffeomorphisms. We achieve regularity properties of Sobolev gradient flows,
which are expensive to compute, owing to a novel method of averaging the
gradients in time rather than space. We successfully register brain MRI and
challenging abdominal CT scans at speeds orders of magnitude faster than
previous approaches. We make our code available in a public repository:
https://github.com/dgrzech/fastreg
| [
{
"created": "Tue, 5 Mar 2019 15:41:47 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Apr 2019 15:37:43 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Apr 2019 10:02:27 GMT",
"version": "v3"
}
] | 2019-04-25 | [
[
"Grzech",
"Daniel",
""
],
[
"Folgoc",
"Loïc le",
""
],
[
"Heinrich",
"Mattias P.",
""
],
[
"Khanal",
"Bishesh",
""
],
[
"Moll",
"Jakub",
""
],
[
"Schnabel",
"Julia A.",
""
],
[
"Glocker",
"Ben",
""
],
[
"Kainz",
"Bernhard",
""
]
] | We present an implementation of a new approach to diffeomorphic non-rigid registration of medical images. The method is based on optical flow and warps images via gradient flow with the standard $L^2$ inner product. To compute the transformation, we rely on accelerated optimisation on the manifold of diffeomorphisms. We achieve regularity properties of Sobolev gradient flows, which are expensive to compute, owing to a novel method of averaging the gradients in time rather than space. We successfully register brain MRI and challenging abdominal CT scans at speeds orders of magnitude faster than previous approaches. We make our code available in a public repository: https://github.com/dgrzech/fastreg |
2105.02318 | Kishor Jothimurugan | Kishor Jothimurugan, Matthew Andrews, Jeongran Lee and Lorenzo Maggi | Learning Algorithms for Regenerative Stopping Problems with Applications
to Shipping Consolidation in Logistics | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study regenerative stopping problems in which the system starts anew
whenever the controller decides to stop and the long-term average cost is to be
minimized. Traditional model-based solutions involve estimating the underlying
process from data and computing strategies for the estimated model. In this
paper, we compare such solutions to deep reinforcement learning and imitation
learning which involve learning a neural network policy from simulations. We
evaluate the different approaches on a real-world problem of shipping
consolidation in logistics and demonstrate that deep learning can be
effectively used to solve such problems.
| [
{
"created": "Wed, 5 May 2021 20:45:46 GMT",
"version": "v1"
}
] | 2021-05-07 | [
[
"Jothimurugan",
"Kishor",
""
],
[
"Andrews",
"Matthew",
""
],
[
"Lee",
"Jeongran",
""
],
[
"Maggi",
"Lorenzo",
""
]
] | We study regenerative stopping problems in which the system starts anew whenever the controller decides to stop and the long-term average cost is to be minimized. Traditional model-based solutions involve estimating the underlying process from data and computing strategies for the estimated model. In this paper, we compare such solutions to deep reinforcement learning and imitation learning which involve learning a neural network policy from simulations. We evaluate the different approaches on a real-world problem of shipping consolidation in logistics and demonstrate that deep learning can be effectively used to solve such problems. |
1410.8127 | Ali Soltani Tehrani | Ali Soltani Tehrani, Jessica Chani, Thomas Eriksson, and Christian
Fager | Investigation of Parameter Adaptation in RF Power Amplifier Behavioral
Models | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an investigation into parameter adaptation in behavioral
model--based digital predistortion for radio frequency power amplifiers. A
novel measurement setup framework that emulates real-time adaptation in
transmitters is developed that allows evaluation of different parameters,
configurations and adaptation algorithms. This setup relieves the need for full
feedback loops for parameter adaptation while providing the flexibility needed
in the design process of parameter adaptation.
Issues such as convergence speed, sensitivity to quantization noise in the
feedback loop and predistortion performance are investigated for several
different parameter update algorithms using the proposed measurement setup. The
approach presented in this paper makes it possible to analyze different aspects
of digital predistortion adaptation algorithms, and is an important enabling
step for further research on parameter adaptation before the real-time hardware
is implemented.
| [
{
"created": "Wed, 29 Oct 2014 19:59:34 GMT",
"version": "v1"
}
] | 2014-10-30 | [
[
"Tehrani",
"Ali Soltani",
""
],
[
"Chani",
"Jessica",
""
],
[
"Eriksson",
"Thomas",
""
],
[
"Fager",
"Christian",
""
]
] | This paper presents an investigation into parameter adaptation in behavioral model-based digital predistortion for radio frequency power amplifiers. A novel measurement setup framework that emulates real-time adaptation in transmitters is developed that allows evaluation of different parameters, configurations and adaptation algorithms. This setup relieves the need for full feedback loops for parameter adaptation while providing the flexibility needed in the design process of parameter adaptation. Issues such as convergence speed, sensitivity to quantization noise in the feedback loop and predistortion performance are investigated for several different parameter update algorithms using the proposed measurement setup. The approach presented in this paper makes it possible to analyze different aspects of digital predistortion adaptation algorithms, and is an important enabling step for further research on parameter adaptation before the real-time hardware is implemented.
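One classic parameter update algorithm of the kind such a setup can evaluate is plain LMS. The sketch below adapts polynomial predistorter coefficients against a toy amplifier model; the PA nonlinearity, basis, and step size are illustrative assumptions, and the update omits the amplifier-derivative factor for simplicity.

```python
# LMS adaptation of a memoryless polynomial predistorter against a toy PA.
import numpy as np

rng = np.random.default_rng(1)

def pa(x):                                   # toy power-amplifier nonlinearity
    return x - 0.2 * x ** 3

w = np.zeros(3)                              # coefficients for [x, x^3, x^5]
mu = 0.05                                    # LMS step size

for _ in range(20_000):
    x = rng.uniform(-1, 1)                   # input sample
    phi = np.array([x, x ** 3, x ** 5])      # predistorter basis
    y = pa(x + w @ phi)                      # PA output after predistortion
    e = x - y                                # error vs. desired linear output
    w += mu * e * phi                        # LMS parameter update

print("adapted coefficients:", np.round(w, 3))
```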
2211.13577 | Cheng Feng | Cheng Feng and Pingge Hu | Learning Invariant Rules from Data for Interpretable Anomaly Detection | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the research area of anomaly detection, novel and promising methods are
frequently developed. However, most existing studies focus exclusively on the
detection task and ignore the interpretability of the underlying models as
well as of their detection results. Nevertheless, anomaly interpretation, which
aims to provide an explanation of why specific data instances are identified as
anomalies, is an equally important task in many real-world applications. In
this work, we propose a novel framework which synergizes several machine
learning and data mining techniques to automatically learn invariant rules that
are consistently satisfied in a given dataset. The learned invariant rules can
provide an explicit explanation of anomaly detection results in the inference
phase and thus are extremely useful for subsequent decision-making regarding
reported anomalies. Furthermore, our empirical evaluation shows that the
proposed method can also achieve comparable or even better performance in terms
of AUC and partial AUC on public benchmark datasets across various application
domains, compared with state-of-the-art anomaly detection models.
| [
{
"created": "Thu, 24 Nov 2022 13:03:20 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Jan 2023 03:51:28 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Jan 2023 03:57:35 GMT",
"version": "v3"
}
] | 2023-01-16 | [
[
"Feng",
"Cheng",
""
],
[
"Hu",
"Pingge",
""
]
] | In the research area of anomaly detection, novel and promising methods are frequently developed. However, most existing studies focus exclusively on the detection task and ignore the interpretability of the underlying models as well as of their detection results. Nevertheless, anomaly interpretation, which aims to provide an explanation of why specific data instances are identified as anomalies, is an equally important task in many real-world applications. In this work, we propose a novel framework which synergizes several machine learning and data mining techniques to automatically learn invariant rules that are consistently satisfied in a given dataset. The learned invariant rules can provide an explicit explanation of anomaly detection results in the inference phase and thus are extremely useful for subsequent decision-making regarding reported anomalies. Furthermore, our empirical evaluation shows that the proposed method can also achieve comparable or even better performance in terms of AUC and partial AUC on public benchmark datasets across various application domains, compared with state-of-the-art anomaly detection models.
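The flavour of "invariant rules as explanations" can be shown in a few lines. The sketch below mines only trivial per-feature bound invariants from assumed-normal data (the paper's framework learns far richer rules) and reports the violated rules as the explanation at inference time.

```python
# Mine per-feature bound invariants from training data; flag and explain
# violations. Illustration of the rule-as-explanation idea only.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0, 1, size=(1000, 3))        # assumed-normal training data

lo, hi = train.min(axis=0), train.max(axis=0)   # invariants: lo[i] <= x[i] <= hi[i]

def explain(sample):
    """Return the violated invariants (empty list means not anomalous)."""
    reasons = []
    for i, v in enumerate(sample):
        if v < lo[i]:
            reasons.append(f"feature {i}: {v:.2f} < learned minimum {lo[i]:.2f}")
        elif v > hi[i]:
            reasons.append(f"feature {i}: {v:.2f} > learned maximum {hi[i]:.2f}")
    return reasons

print(explain(np.array([0.1, -0.3, 0.2])))    # typically satisfies every rule
print(explain(np.array([8.0, 0.0, -9.0])))    # violations explain the anomaly
```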
1709.05861 | Narotam Singh | Narotam Singh (1), Nittin Singh (1), Abhinav Dhall (1) ((1) Indian
Institute of Technology Ropar) | Continuous Multimodal Emotion Recognition Approach for AVEC 2017 | 4 pages, 3 figures, arXiv:1605.06778, arXiv:1512.03385 | null | null | null | cs.CV cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports the analysis of audio and visual features in predicting
the continuous emotion dimensions under the seventh Audio/Visual Emotion
Challenge (AVEC 2017), which was done as part of a B.Tech. 2nd year internship
project. For visual features we used HOG (Histogram of Oriented Gradients) features,
Fisher encodings of SIFT (Scale-Invariant Feature Transform) features based on
Gaussian mixture model (GMM) and some pretrained Convolutional Neural Network
layers as features; all these extracted for each video clip. For audio features
we used the Bag-of-audio-words (BoAW) representation of the LLDs (low-level
descriptors) generated by openXBOW provided by the organisers of the event.
Then we trained a fully connected neural network regression model on the dataset
for all these different modalities. We applied multimodal fusion on the output
models to get the Concordance correlation coefficient on the Development set as
well as the Test set.
| [
{
"created": "Mon, 18 Sep 2017 11:01:43 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Oct 2017 12:08:09 GMT",
"version": "v2"
}
] | 2017-10-25 | [
[
"Singh",
"Narotam",
""
],
[
"Singh",
"Nittin",
""
],
[
"Dhall",
"Abhinav",
""
]
] | This paper reports the analysis of audio and visual features in predicting the continuous emotion dimensions under the seventh Audio/Visual Emotion Challenge (AVEC 2017), which was done as part of a B.Tech. 2nd year internship project. For visual features we used HOG (Histogram of Oriented Gradients) features, Fisher encodings of SIFT (Scale-Invariant Feature Transform) features based on Gaussian mixture model (GMM) and some pretrained Convolutional Neural Network layers as features; all these extracted for each video clip. For audio features we used the Bag-of-audio-words (BoAW) representation of the LLDs (low-level descriptors) generated by openXBOW provided by the organisers of the event. Then we trained a fully connected neural network regression model on the dataset for all these different modalities. We applied multimodal fusion on the output models to get the Concordance correlation coefficient on the Development set as well as the Test set.
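For reference, the challenge metric named above, the Concordance Correlation Coefficient (CCC), is simple to compute; a self-contained implementation with toy numbers:

```python
# CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
import numpy as np

def ccc(y_true, y_pred):
    x, y = np.asarray(y_true, float), np.asarray(y_pred, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

gold = np.array([0.1, 0.4, 0.35, 0.8])
pred = np.array([0.15, 0.38, 0.30, 0.7])
print(f"CCC = {ccc(gold, pred):.3f}")
```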
1908.10149 | Michael Barz | Michael Barz and Daniel Sonntag | Incremental Improvement of a Question Answering System by Re-ranking
Answer Candidates using Machine Learning | Accepted for oral presentation at tenth International Workshop on
Spoken Dialogue Systems Technology (IWSDS) 2019 | null | 10.1007/978-981-15-9323-9_34 | null | cs.LG cs.CL cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We implement a method for re-ranking top-10 results of a state-of-the-art
question answering (QA) system. The goal of our re-ranking approach is to
improve the answer selection given the user question and the top-10 candidates.
We focus on improving deployed QA systems that do not allow re-training or for
which re-training comes at a high cost. Our re-ranking approach learns a
similarity function over n-gram based features, using the query, the answer and
the initial system confidence as input. Our contributions are: (1) we generate a QA
training corpus starting from 877 answers from the customer care domain of
T-Mobile Austria, (2) we implement a state-of-the-art QA pipeline using neural
sentence embeddings that encode queries in the same space as the answer
index, and (3) we evaluate the QA pipeline and our re-ranking approach using a
separately provided test set. The test set can be considered to be available
after deployment of the system, e.g., based on feedback of users. Our results
show that the system performance, in terms of top-n accuracy and the mean
reciprocal rank, benefits from re-ranking using gradient boosted regression
trees. On average, the mean reciprocal rank improves by 9.15%.
| [
{
"created": "Tue, 27 Aug 2019 11:54:23 GMT",
"version": "v1"
}
] | 2021-06-17 | [
[
"Barz",
"Michael",
""
],
[
"Sonntag",
"Daniel",
""
]
] | We implement a method for re-ranking top-10 results of a state-of-the-art question answering (QA) system. The goal of our re-ranking approach is to improve the answer selection given the user question and the top-10 candidates. We focus on improving deployed QA systems that do not allow re-training or for which re-training comes at a high cost. Our re-ranking approach learns a similarity function over n-gram based features, using the query, the answer and the initial system confidence as input. Our contributions are: (1) we generate a QA training corpus starting from 877 answers from the customer care domain of T-Mobile Austria, (2) we implement a state-of-the-art QA pipeline using neural sentence embeddings that encode queries in the same space as the answer index, and (3) we evaluate the QA pipeline and our re-ranking approach using a separately provided test set. The test set can be considered to be available after deployment of the system, e.g., based on feedback of users. Our results show that the system performance, in terms of top-n accuracy and the mean reciprocal rank, benefits from re-ranking using gradient boosted regression trees. On average, the mean reciprocal rank improves by 9.15%.
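A rough sketch of the re-ranking step described above: n-gram overlap plus the initial confidence as features for a gradient boosted regression tree that re-orders candidates. The customer-care strings and labels below are invented placeholders, not the T-Mobile Austria corpus.

```python
# Train a tiny GBRT re-ranker on (overlap, confidence) -> relevance features.
from sklearn.ensemble import GradientBoostingRegressor

def ngram_overlap(q, a, n=2):
    grams = lambda s: {tuple(s.split()[i:i + n])
                       for i in range(len(s.split()) - n + 1)}
    gq, ga = grams(q), grams(a)
    return len(gq & ga) / max(1, len(gq))

# (query, candidate answer, initial confidence, relevance label) -- toy data
train = [
    ("reset my voicemail pin", "dial 93 to reset the voicemail pin", 0.6, 1.0),
    ("reset my voicemail pin", "roaming charges are listed online", 0.5, 0.0),
    ("change my tariff plan", "you can change the tariff plan in the app", 0.7, 1.0),
    ("change my tariff plan", "dial 93 to reset the voicemail pin", 0.4, 0.0),
]
X = [[ngram_overlap(q, a), conf] for q, a, conf, _ in train]
y = [label for *_, label in train]
ranker = GradientBoostingRegressor(n_estimators=50).fit(X, y)

def rerank(query, candidates):   # candidates: list of (answer, confidence)
    scored = [(ranker.predict([[ngram_overlap(query, a), c]])[0], a)
              for a, c in candidates]
    return [a for _, a in sorted(scored, reverse=True)]

print(rerank("reset my voicemail pin",
             [("roaming charges are listed online", 0.9),
              ("dial 93 to reset the voicemail pin", 0.4)]))
```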
2308.11373 | Zesen Liu | Zesen Liu, Meng Guo, Weimin Bao and Zhongkui Li | Fast and Adaptive Multi-agent Planning under Collaborative Temporal
Logic Tasks via Poset Products | 16 pages, 9 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Efficient coordination and planning are essential for large-scale multi-agent
systems that collaborate in a shared dynamic environment. Heuristic search
methods or learning-based approaches often lack guarantees on correctness
and performance. Moreover, when the collaborative tasks contain both spatial
and temporal requirements, e.g., as Linear Temporal Logic (LTL) formulas,
formal methods provide a verifiable framework for task planning. However, since
the planning complexity grows exponentially with the number of agents and the
length of the task formula, existing studies are mostly limited to small
artificial cases. To address this issue, a new planning paradigm is proposed in
this work for system-wide temporal task formulas that are released online and
continually. It avoids two common bottlenecks in the traditional methods, i.e.,
(i) the direct translation of the complete task formula to the associated
B\"uchi automaton; and (ii) the synchronized product between the B\"uchi
automaton and the transition models of all agents. Instead, an adaptive
planning algorithm is proposed that computes the product of relaxed
partially-ordered sets (R-posets) on-the-fly, and assigns these subtasks to the
agents subject to the ordering constraints. It is shown that the first valid
plan can be derived with a polynomial time and memory complexity w.r.t. the
system size and the formula length. Our method can take into account task
formulas with a length of more than 400 and a fleet with more than 400
agents, while most existing methods fail at a formula length of 25 within a
reasonable duration. The proposed method is validated on large fleets of
service robots in both simulation and hardware experiments.
| [
{
"created": "Tue, 22 Aug 2023 11:56:15 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Apr 2024 14:01:16 GMT",
"version": "v2"
}
] | 2024-04-10 | [
[
"Liu",
"Zesen",
""
],
[
"Guo",
"Meng",
""
],
[
"Bao",
"Weimin",
""
],
[
"Li",
"Zhongkui",
""
]
] | Efficient coordination and planning are essential for large-scale multi-agent systems that collaborate in a shared dynamic environment. Heuristic search methods or learning-based approaches often lack guarantees on correctness and performance. Moreover, when the collaborative tasks contain both spatial and temporal requirements, e.g., as Linear Temporal Logic (LTL) formulas, formal methods provide a verifiable framework for task planning. However, since the planning complexity grows exponentially with the number of agents and the length of the task formula, existing studies are mostly limited to small artificial cases. To address this issue, a new planning paradigm is proposed in this work for system-wide temporal task formulas that are released online and continually. It avoids two common bottlenecks in the traditional methods, i.e., (i) the direct translation of the complete task formula to the associated B\"uchi automaton; and (ii) the synchronized product between the B\"uchi automaton and the transition models of all agents. Instead, an adaptive planning algorithm is proposed that computes the product of relaxed partially-ordered sets (R-posets) on-the-fly, and assigns these subtasks to the agents subject to the ordering constraints. It is shown that the first valid plan can be derived with a polynomial time and memory complexity w.r.t. the system size and the formula length. Our method can take into account task formulas with a length of more than 400 and a fleet with more than 400 agents, while most existing methods fail at a formula length of 25 within a reasonable duration. The proposed method is validated on large fleets of service robots in both simulation and hardware experiments.
1005.0600 | Manuel Kauers | Manuel Kauers and Veronika Pillwein | When can we decide that a P-finite sequence is positive? | null | null | null | null | cs.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider two algorithms which can be used for proving positivity of
sequences that are defined by a linear recurrence equation with polynomial
coefficients (P-finite sequences). Both algorithms have in common that while
they do succeed on a great many examples, there is no guarantee for them to
terminate, and they do in fact not terminate for every input. For some
restricted classes of P-finite recurrence equations of order up to three we
provide a priori criteria that assert the termination of the algorithms.
| [
{
"created": "Tue, 4 May 2010 18:24:19 GMT",
"version": "v1"
}
] | 2010-05-05 | [
[
"Kauers",
"Manuel",
""
],
[
"Pillwein",
"Veronika",
""
]
] | We consider two algorithms which can be used for proving positivity of sequences that are defined by a linear recurrence equation with polynomial coefficients (P-finite sequences). Both algorithms have in common that while they do succeed on a great many examples, there is no guarantee for them to terminate, and they do in fact not terminate for every input. For some restricted classes of P-finite recurrence equations of order up to three we provide a priori criteria that assert the termination of the algorithms. |
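To convey the flavour of such semi-decision procedures: for a recurrence a(n+2) = p(n)a(n+1) + q(n)a(n), positivity of two consecutive terms propagates once p stays positive and q non-negative. The sketch below checks that condition only on a finite window, so it is a heuristic illustration rather than one of the paper's algorithms; note that it can also return "undecided", mirroring possible non-termination.

```python
# Heuristic positivity check for a(n+2) = p(n)*a(n+1) + q(n)*a(n).
def positive(p, q, a0, a1, horizon=1000, window=50):
    a, b = a0, a1
    for n in range(horizon):
        if a <= 0 or b <= 0:
            return False                 # a non-positive term was found
        if all(p(m) > 0 and q(m) >= 0 for m in range(n, n + window)):
            return True                  # induction step applies (heuristically:
                                         # the window check is not a proof)
        a, b = b, p(n) * b + q(n) * a
    return None                          # undecided, like a non-terminating run

# a(n+2) = a(n+1) + (n+1)*a(n) with a0 = a1 = 1 is clearly positive:
print(positive(lambda n: 1, lambda n: n + 1, 1, 1))
```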
1510.02395 | Mohammed Gollapalli Dr. | Mohammed Gollapalli | Literature Review Of Attribute Level And Structure Level Data Linkage
Techniques | 20 pages | International Journal of Data Mining & Knowledge Management
Process (IJDKP) Vol.5, No.5, September 2015 | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data Linkage is an important step that can provide valuable insights for
evidence-based decision making, especially for crucial events. Performing
sensible queries across heterogeneous databases containing millions of records
is a complex task that requires a complete understanding of each contributing
database's schema to define the structure of its information. The key aim is to
summarise the structure and content of the induced data into a concise
synopsis in order to extract and link meaningful data-driven facts. We identify
such problems as four major research issues in Data Linkage: associated costs
in pair-wise matching, record matching overheads, semantic flow of information
restrictions, and single order classification limitations. In this paper, we
give a literature review of research in Data Linkage. The purpose of this
review is to establish a basic understanding of Data Linkage, and to discuss
the background in the Data Linkage research domain. Particularly, we focus on
the literature related to the recent advancements in Approximate Matching
algorithms at Attribute Level and Structure Level. Their efficiency,
functionality and limitations are critically analysed, and open-ended problems
are exposed.
| [
{
"created": "Wed, 7 Oct 2015 12:38:24 GMT",
"version": "v1"
}
] | 2015-10-09 | [
[
"Gollapalli",
"Mohammed",
""
]
] | Data Linkage is an important step that can provide valuable insights for evidence-based decision making, especially for crucial events. Performing sensible queries across heterogeneous databases containing millions of records is a complex task that requires a complete understanding of each contributing database's schema to define the structure of its information. The key aim is to summarise the structure and content of the induced data into a concise synopsis in order to extract and link meaningful data-driven facts. We identify such problems as four major research issues in Data Linkage: associated costs in pair-wise matching, record matching overheads, semantic flow of information restrictions, and single order classification limitations. In this paper, we give a literature review of research in Data Linkage. The purpose of this review is to establish a basic understanding of Data Linkage, and to discuss the background in the Data Linkage research domain. Particularly, we focus on the literature related to the recent advancements in Approximate Matching algorithms at Attribute Level and Structure Level. Their efficiency, functionality and limitations are critically analysed, and open-ended problems are exposed.
2205.08891 | Jingqing Zhang | Jingqing Zhang, Atri Sharma, Luis Bolanos, Tong Li, Ashwani Tanwar,
Vibhor Gupta, Yike Guo | A Scalable Workflow to Build Machine Learning Classifiers with
Clinician-in-the-Loop to Identify Patients in Specific Diseases | Under review | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Clinicians may rely on medical coding systems such as International
Classification of Diseases (ICD) to identify patients with diseases from
Electronic Health Records (EHRs). However, due to the lack of detail and
specificity, as well as the possibility of miscoding, recent studies suggest the
ICD codes often cannot characterise patients accurately for specific diseases
in real clinical practice, and as a result, using them to find patients for
studies or trials can result in high failure rates and missing out on uncoded
patients. Manual inspection of all patients at scale is not feasible as it is
highly costly and slow.
This paper proposes a scalable workflow which leverages both structured data
and unstructured textual notes from EHRs with techniques including NLP, AutoML
and a Clinician-in-the-Loop mechanism to build machine learning classifiers to
identify patients at scale with given diseases, especially those who might
currently be miscoded or missed by ICD codes.
Case studies in the MIMIC-III dataset were conducted in which the proposed
workflow demonstrates a higher classification performance in terms of F1 scores
compared to simply using ICD codes on a gold testing subset to identify patients
with Ovarian Cancer (0.901 vs 0.814), Lung Cancer (0.859 vs 0.828), Cancer
Cachexia (0.862 vs 0.650), and Lupus Nephritis (0.959 vs 0.855). Also, the
proposed workflow that leverages unstructured notes consistently outperforms
the baseline that uses structured data only with an increase of F1 (Ovarian
Cancer 0.901 vs 0.719, Lung Cancer 0.859 vs 0.787, Cancer Cachexia 0.862 vs
0.838 and Lupus Nephritis 0.959 vs 0.785). Experiments on the large testing set
also demonstrate the proposed workflow can find more patients who are miscoded
or missed by ICD codes. Moreover, interpretability studies are also conducted
to clinically validate the top impact features of the classifiers.
| [
{
"created": "Wed, 18 May 2022 12:24:07 GMT",
"version": "v1"
}
] | 2022-05-19 | [
[
"Zhang",
"Jingqing",
""
],
[
"Sharma",
"Atri",
""
],
[
"Bolanos",
"Luis",
""
],
[
"Li",
"Tong",
""
],
[
"Tanwar",
"Ashwani",
""
],
[
"Gupta",
"Vibhor",
""
],
[
"Guo",
"Yike",
""
]
] | Clinicians may rely on medical coding systems such as International Classification of Diseases (ICD) to identify patients with diseases from Electronic Health Records (EHRs). However, due to the lack of detail and specificity, as well as the possibility of miscoding, recent studies suggest the ICD codes often cannot characterise patients accurately for specific diseases in real clinical practice, and as a result, using them to find patients for studies or trials can result in high failure rates and missing out on uncoded patients. Manual inspection of all patients at scale is not feasible as it is highly costly and slow. This paper proposes a scalable workflow which leverages both structured data and unstructured textual notes from EHRs with techniques including NLP, AutoML and a Clinician-in-the-Loop mechanism to build machine learning classifiers to identify patients at scale with given diseases, especially those who might currently be miscoded or missed by ICD codes. Case studies in the MIMIC-III dataset were conducted in which the proposed workflow demonstrates a higher classification performance in terms of F1 scores compared to simply using ICD codes on a gold testing subset to identify patients with Ovarian Cancer (0.901 vs 0.814), Lung Cancer (0.859 vs 0.828), Cancer Cachexia (0.862 vs 0.650), and Lupus Nephritis (0.959 vs 0.855). Also, the proposed workflow that leverages unstructured notes consistently outperforms the baseline that uses structured data only with an increase of F1 (Ovarian Cancer 0.901 vs 0.719, Lung Cancer 0.859 vs 0.787, Cancer Cachexia 0.862 vs 0.838 and Lupus Nephritis 0.959 vs 0.785). Experiments on the large testing set also demonstrate the proposed workflow can find more patients who are miscoded or missed by ICD codes. Moreover, interpretability studies are also conducted to clinically validate the top impact features of the classifiers.
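A minimal stand-in for the note-based classifier component of such a workflow; the note snippets and labels below are synthetic, and TF-IDF with logistic regression replaces the paper's NLP/AutoML stack.

```python
# TF-IDF features from (made-up) clinical note snippets feeding a classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "left ovarian mass noted, CA-125 elevated",
    "patient admitted for knee replacement",
    "biopsy confirms ovarian carcinoma",
    "routine follow-up, no acute findings",
]
labels = [1, 0, 1, 0]   # 1 = disease cohort, 0 = other (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(notes, labels)
print(clf.predict(["imaging suggests ovarian neoplasm"]))
```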
2405.19701 | Lavanya Prahallad | Lavanya Prahallad, Radhika Mamidi | Significance of Chain of Thought in Gender Bias Mitigation for
English-Dravidian Machine Translation | 6 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gender bias in machine translation (MT) systems poses a significant
challenge to achieving accurate and inclusive translations. This paper examines
gender bias in machine translation systems for languages such as Telugu and
Kannada from the Dravidian family, analyzing how gender inflections affect
translation accuracy and neutrality using Google Translate and ChatGPT. It
finds that while plural forms can reduce bias, individual-centric sentences
often maintain the bias due to historical stereotypes. The study evaluates
the Chain of Thought processing, noting significant bias mitigation from 80%
to 4% in Telugu and from 40% to 0% in Kannada. It also compares Telugu and
Kannada translations, emphasizing the need for language-specific strategies to
address these challenges and suggesting directions for future research to
enhance fairness in both data preparation and prompts during inference.
| [
{
"created": "Thu, 30 May 2024 05:26:57 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 15:59:34 GMT",
"version": "v2"
}
] | 2024-06-04 | [
[
"Prahallad",
"Lavanya",
""
],
[
"Mamidi",
"Radhika",
""
]
] | Gender bias in machine translation (MT) systems poses a significant challenge to achieving accurate and inclusive translations. This paper examines gender bias in machine translation systems for languages such as Telugu and Kannada from the Dravidian family, analyzing how gender inflections affect translation accuracy and neutrality using Google Translate and ChatGPT. It finds that while plural forms can reduce bias, individual-centric sentences often maintain the bias due to historical stereotypes. The study evaluates the Chain of Thought processing, noting significant bias mitigation from 80% to 4% in Telugu and from 40% to 0% in Kannada. It also compares Telugu and Kannada translations, emphasizing the need for language-specific strategies to address these challenges and suggesting directions for future research to enhance fairness in both data preparation and prompts during inference.
1312.5912 | Andrea Cal\`i PhD | Andrea Cal\`i and Riccardo Torlone | Containment of Schema Mappings for Data Exchange (Preliminary Report) | 11 pages, no figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In data exchange, data are materialised from a source schema to a target
schema, according to suitable source-to-target constraints. Constraints are
also expressed on the target schema to represent the domain of interest. A
schema mapping is the union of the source-to-target and of the target
constraints.
In this paper, we address the problem of containment of schema mappings for
data exchange, which has been recently proposed in this framework as a step
towards the optimization of data exchange settings. We refer to a natural
notion of containment that relies on the behaviour of schema mappings with
respect to conjunctive query answering, in the presence of so-called LAV TGDs
as target constraints. Our contribution is a practical technique for testing
the containment based on the existence of a homomorphism between special
"dummy" instances, which can be easily built from schema mappings.
We argue that containment of schema mappings is decidable for most practical
cases, and we set the basis for further investigations in the topic. This paper
extends our preliminary results.
| [
{
"created": "Fri, 20 Dec 2013 12:13:11 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Dec 2013 09:51:14 GMT",
"version": "v2"
}
] | 2014-01-03 | [
[
"Calì",
"Andrea",
""
],
[
"Torlone",
"Riccardo",
""
]
] | In data exchange, data are materialised from a source schema to a target schema, according to suitable source-to-target constraints. Constraints are also expressed on the target schema to represent the domain of interest. A schema mapping is the union of the source-to-target and of the target constraints. In this paper, we address the problem of containment of schema mappings for data exchange, which has been recently proposed in this framework as a step towards the optimization of data exchange settings. We refer to a natural notion of containment that relies on the behaviour of schema mappings with respect to conjunctive query answering, in the presence of so-called LAV TGDs as target constraints. Our contribution is a practical technique for testing the containment based on the existence of a homomorphism between special "dummy" instances, which can be easily built from schema mappings. We argue that containment of schema mappings is decidable for most practical cases, and we set the basis for further investigations in the topic. This paper extends our preliminary results. |
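The homomorphism test at the heart of the technique is easy to illustrate on toy instances. Below is a brute-force check that maps the constants of one instance into another while preserving facts; the "dummy" instances built from real schema mappings would be fed to a test of this shape.

```python
# Brute-force homomorphism test between two small relational instances.
from itertools import product

def homomorphism(src, dst):
    """Map the constants of src into those of dst so that every src fact
    lands on a dst fact; return one such mapping, or None."""
    dom = sorted({c for _, args in src for c in args})
    rng = sorted({c for _, args in dst for c in args})
    dst_set = {(r, tuple(args)) for r, args in dst}
    for image in product(rng, repeat=len(dom)):
        h = dict(zip(dom, image))
        if all((r, tuple(h[c] for c in args)) in dst_set for r, args in src):
            return h
    return None

A = [("R", ("x", "y")), ("R", ("y", "z"))]   # a path of length two
B = [("R", ("a", "a"))]                      # a single self-loop
print(homomorphism(A, B))   # {'x': 'a', 'y': 'a', 'z': 'a'}: A folds into B
print(homomorphism(B, A))   # None: A contains no self-loop
```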
1411.1607 | Alan Edelman | Jeff Bezanson, Alan Edelman, Stefan Karpinski, Viral B. Shah | Julia: A Fresh Approach to Numerical Computing | 37 pages | null | null | null | cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bridging cultures that have often been distant, Julia combines expertise from
the diverse fields of computer science and computational science to create a
new approach to numerical computing. Julia is designed to be easy and fast.
Julia questions notions generally held as "laws of nature" by practitioners of
numerical computing:
1. High-level dynamic programs have to be slow.
2. One must prototype in one language and then rewrite in another language
for speed or deployment, and
3. There are parts of a system for the programmer, and other parts best left
untouched as they are built by the experts.
We introduce the Julia programming language and its design --- a dance
between specialization and abstraction. Specialization allows for custom
treatment. Multiple dispatch, a technique from computer science, picks the
right algorithm for the right circumstance. Abstraction, what good computation
is really about, recognizes what remains the same after differences are
stripped away. Abstractions in mathematics are captured as code through another
technique from computer science, generic programming.
Julia shows that one can have machine performance without sacrificing human
convenience.
| [
{
"created": "Thu, 6 Nov 2014 13:39:40 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Nov 2014 11:19:21 GMT",
"version": "v2"
},
{
"created": "Fri, 12 Dec 2014 22:40:09 GMT",
"version": "v3"
},
{
"created": "Sun, 19 Jul 2015 19:58:28 GMT",
"version": "v4"
}
] | 2015-07-21 | [
[
"Bezanson",
"Jeff",
""
],
[
"Edelman",
"Alan",
""
],
[
"Karpinski",
"Stefan",
""
],
[
"Shah",
"Viral B.",
""
]
] | Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast. Julia questions notions generally held as "laws of nature" by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment, and 3. There are parts of a system for the programmer, and other parts best left untouched as they are built by the experts. We introduce the Julia programming language and its design --- a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can have machine performance without sacrificing human convenience. |
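Multiple dispatch, the technique credited above with picking the right algorithm for the right circumstance, can be mimicked in a few lines of Python for illustration; Julia provides it natively and far more efficiently.

```python
# Toy multiple dispatch: the implementation is chosen from the runtime types
# of *all* arguments, not just the first.
impls = {}

def register(name, *types):
    """Register an implementation of `name` for the given argument types."""
    def deco(fn):
        impls[(name, types)] = fn
        return fn
    return deco

def call(name, *args):
    """Dispatch on the tuple of runtime argument types."""
    return impls[(name, tuple(type(a) for a in args))](*args)

register("mul", int, int)(lambda a, b: a * b)
register("mul", str, int)(lambda s, n: s * n)

print(call("mul", 3, 4))     # dispatches on (int, int): 12
print(call("mul", "ab", 3))  # dispatches on (str, int): 'ababab'
```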
2208.06946 | Fangyi Yu | Fangyi Yu and Miguel Vargas Martin | Targeted Honeyword Generation with Language Models | 8 pages, 7 tables, 2 figures | null | null | null | cs.AI cs.CR | http://creativecommons.org/licenses/by/4.0/ | Honeywords are fictitious passwords inserted into databases in order to
identify password breaches. The major difficulty is how to produce honeywords
that are difficult to distinguish from real passwords. Although the generation
of honeywords has been widely investigated in the past, the majority of
existing research assumes attackers have no knowledge of the users. These
honeyword generating techniques (HGTs) may utterly fail if attackers exploit
users' personally identifiable information (PII) and the real passwords include
users' PII. In this paper, we propose to build a more secure and trustworthy
authentication system that employs off-the-shelf pre-trained language models
which require no further training on real passwords to produce honeywords while
retaining the PII of the associated real password, therefore significantly
raising the bar for attackers.
We conducted a pilot experiment in which individuals were asked to distinguish
between authentic passwords and honeywords when the username is provided for
GPT-3 and a tweaking technique. Results show that it is extremely difficult to
distinguish the real passwords from the artificial ones for both techniques. We
speculate that a larger sample size could reveal a significant difference
between the two HGT techniques, favouring our proposed approach.
| [
{
"created": "Mon, 15 Aug 2022 00:06:29 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Aug 2022 16:12:27 GMT",
"version": "v2"
}
] | 2022-08-24 | [
[
"Yu",
"Fangyi",
""
],
[
"Martin",
"Miguel Vargas",
""
]
] | Honeywords are fictitious passwords inserted into databases in order to identify password breaches. The major difficulty is how to produce honeywords that are difficult to distinguish from real passwords. Although the generation of honeywords has been widely investigated in the past, the majority of existing research assumes attackers have no knowledge of the users. These honeyword generating techniques (HGTs) may utterly fail if attackers exploit users' personally identifiable information (PII) and the real passwords include users' PII. In this paper, we propose to build a more secure and trustworthy authentication system that employs off-the-shelf pre-trained language models which require no further training on real passwords to produce honeywords while retaining the PII of the associated real password, therefore significantly raising the bar for attackers. We conducted a pilot experiment in which individuals were asked to distinguish between authentic passwords and honeywords when the username is provided for GPT-3 and a tweaking technique. Results show that it is extremely difficult to distinguish the real passwords from the artificial ones for both techniques. We speculate that a larger sample size could reveal a significant difference between the two HGT techniques, favouring our proposed approach.
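The "tweaking technique" used as the comparison baseline is the classic tail-tweaking honeyword generator; a minimal version, with character classes and counts as illustrative choices:

```python
# Chaffing by tweaking: re-randomise the last t characters of the real
# password while preserving each character's class (digit/letter/symbol).
import random
import string

def tweak(password, k=19, t=3, seed=42):
    rng = random.Random(seed)
    def same_class(ch):
        if ch.isdigit():
            return rng.choice(string.digits)
        if ch.isalpha():
            return rng.choice(string.ascii_lowercase)
        return rng.choice("!@#$%^&*")
    honeywords = set()
    while len(honeywords) < k:
        cand = password[:-t] + "".join(same_class(c) for c in password[-t:])
        if cand != password:
            honeywords.add(cand)
    return sorted(honeywords)

print(tweak("hunter42!"))
```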
1903.02255 | Peter Boyvalenkov | Peter Boyvalenkov, Danyo Danev | Linear Programming Bounds | 22 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This chapter is written for the forthcoming book "A Concise Encyclopedia of
Coding Theory" (CRC press), edited by W. Cary Huffman, Jon-Lark Kim, and
Patrick Sol\'e. This book will collect short but foundational articles,
emphasizing definitions, examples, exhaustive references, and basic facts. The
target audience of the Encyclopedia is upper level undergraduates and graduate
students.
| [
{
"created": "Wed, 6 Mar 2019 09:16:13 GMT",
"version": "v1"
}
] | 2019-03-07 | [
[
"Boyvalenkov",
"Peter",
""
],
[
"Danev",
"Danyo",
""
]
] | This chapter is written for the forthcoming book "A Concise Encyclopedia of Coding Theory" (CRC press), edited by W. Cary Huffman, Jon-Lark Kim, and Patrick Sol\'e. This book will collect short but foundational articles, emphasizing definitions, examples, exhaustive references, and basic facts. The target audience of the Encyclopedia is upper level undergraduates and graduate students. |
2203.16952 | Swalpa Kumar Roy Dr. | Swalpa Kumar Roy, Ankur Deria, Danfeng Hong, Behnood Rasti, Antonio
Plaza, Jocelyn Chanussot | Multimodal Fusion Transformer for Remote Sensing Image Classification | Published in IEEE Transactions on Geoscience and Remote Sensing | null | 10.1109/TGRS.2023.3286826 | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Vision transformers (ViTs) have been trending in image classification tasks
due to their promising performance when compared to convolutional neural
networks (CNNs). As a result, many researchers have tried to incorporate ViTs
in hyperspectral image (HSI) classification tasks. To achieve satisfactory
performance, close to that of CNNs, transformers need fewer parameters. ViTs
and other similar transformers use an external classification (CLS) token which
is randomly initialized and often fails to generalize well, whereas other
sources of multimodal datasets, such as light detection and ranging (LiDAR)
offer the potential to improve these models by means of a CLS. In this paper,
we introduce a new multimodal fusion transformer (MFT) network which comprises
a multihead cross patch attention (mCrossPA) for HSI land-cover classification.
Our mCrossPA utilizes other sources of complementary information in addition to
the HSI in the transformer encoder to achieve better generalization. The
concept of tokenization is used to generate CLS and HSI patch tokens, helping
to learn a distinctive representation in a reduced and hierarchical feature
space. Extensive experiments are carried out on widely used benchmark
datasets, i.e., the University of Houston, Trento, University of Southern
Mississippi Gulfpark (MUUFL), and Augsburg. We compare the results of the
proposed MFT model with other state-of-the-art transformers, classical CNNs,
and conventional classifier models. The superior performance achieved by the
proposed model is due to the use of multihead cross patch attention. The source
code will be made available publicly at
\url{https://github.com/AnkurDeria/MFT}.
| [
{
"created": "Thu, 31 Mar 2022 11:18:41 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Jun 2023 17:58:25 GMT",
"version": "v2"
}
] | 2023-06-21 | [
[
"Roy",
"Swalpa Kumar",
""
],
[
"Deria",
"Ankur",
""
],
[
"Hong",
"Danfeng",
""
],
[
"Rasti",
"Behnood",
""
],
[
"Plaza",
"Antonio",
""
],
[
"Chanussot",
"Jocelyn",
""
]
] | Vision transformers (ViTs) have been trending in image classification tasks due to their promising performance when compared to convolutional neural networks (CNNs). As a result, many researchers have tried to incorporate ViTs in hyperspectral image (HSI) classification tasks. To achieve satisfactory performance, close to that of CNNs, transformers need fewer parameters. ViTs and other similar transformers use an external classification (CLS) token which is randomly initialized and often fails to generalize well, whereas other sources of multimodal datasets, such as light detection and ranging (LiDAR) offer the potential to improve these models by means of a CLS. In this paper, we introduce a new multimodal fusion transformer (MFT) network which comprises a multihead cross patch attention (mCrossPA) for HSI land-cover classification. Our mCrossPA utilizes other sources of complementary information in addition to the HSI in the transformer encoder to achieve better generalization. The concept of tokenization is used to generate CLS and HSI patch tokens, helping to learn a distinctive representation in a reduced and hierarchical feature space. Extensive experiments are carried out on widely used benchmark datasets, i.e., the University of Houston, Trento, University of Southern Mississippi Gulfpark (MUUFL), and Augsburg. We compare the results of the proposed MFT model with other state-of-the-art transformers, classical CNNs, and conventional classifier models. The superior performance achieved by the proposed model is due to the use of multihead cross patch attention. The source code will be made available publicly at \url{https://github.com/AnkurDeria/MFT}.
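A rough PyTorch sketch of the cross-attention idea described above, with a complementary-modality token attending over HSI patch tokens; all shapes, head counts, and class counts are arbitrary assumptions, and this is not the released MFT code.

```python
# A token derived from another modality (e.g., LiDAR) serves as the CLS query
# over HSI patch tokens via multihead cross attention.
import torch
import torch.nn as nn

d, n_patches, n_classes = 64, 16, 7
hsi_tokens = torch.randn(2, n_patches, d)   # batch of HSI patch tokens
lidar_cls = torch.randn(2, 1, d)            # per-sample LiDAR-derived CLS token

cross_attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
head = nn.Linear(d, n_classes)

# Query = LiDAR CLS token; keys/values = HSI patch tokens.
cls_out, attn_w = cross_attn(lidar_cls, hsi_tokens, hsi_tokens)
logits = head(cls_out.squeeze(1))
print(cls_out.shape, logits.shape)          # torch.Size([2, 1, 64]) torch.Size([2, 7])
```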
cs/0701118 | Mohammad Ali Maddah-Ali Mr. | Mohammad Ali Maddah-Ali, Hajar Mahdavi-Doost, and Amir K. Khandani | Optimal Order of Decoding for Max-Min Fairness in $K$-User Memoryless
Interference Channels | 11 Pages, Submitted to IEEE International Symposium on Information
Theory(ISIT 2007) | null | 10.1109/ISIT.2007.4557653 | null | cs.IT math.IT | null | A $K$-user memoryless interference channel is considered where each receiver
sequentially decodes the data of a subset of transmitters before it decodes the
data of the designated transmitter. Therefore, the data rate of each
transmitter depends on (i) the subset of receivers which decode the data of
that transmitter, and (ii) the decoding order employed at each of these receivers.
In this paper, a greedy algorithm is developed to find the users which are
decoded at each receiver and the corresponding decoding order such that the
minimum rate of the users is maximized. It is proven that the proposed
algorithm is optimal.
| [
{
"created": "Thu, 18 Jan 2007 20:54:03 GMT",
"version": "v1"
}
] | 2016-11-15 | [
[
"Maddah-Ali",
"Mohammad Ali",
""
],
[
"Mahdavi-Doost",
"Hajar",
""
],
[
"Khandani",
"Amir K.",
""
]
] | A $K$-user memoryless interference channel is considered where each receiver sequentially decodes the data of a subset of transmitters before it decodes the data of the designated transmitter. Therefore, the data rate of each transmitter depends on (i) the subset of receivers which decode the data of that transmitter, and (ii) the decoding order employed at each of these receivers. In this paper, a greedy algorithm is developed to find the users which are decoded at each receiver and the corresponding decoding order such that the minimum rate of the users is maximized. It is proven that the proposed algorithm is optimal.
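The greedy structure is easy to sketch. Below, a toy Gaussian successive-decoding rate model (gains and noise are invented numbers) and a greedy loop that keeps adding a user to some receiver's decoding order whenever that raises the minimum rate; the paper's algorithm differs in detail and comes with an optimality proof.

```python
# Greedy max-min search over decoding orders in a toy 3-user channel.
import math

K, noise = 3, 0.1
g = [[1.0, 0.3, 0.2],    # g[i][j]: received power of transmitter j at receiver i
     [0.25, 1.0, 0.3],
     [0.2, 0.35, 1.0]]

def rates(orders):
    """orders[i] lists the users receiver i decodes, in order, ending with i.
    A user's rate is the minimum over all receivers that decode it."""
    r = [math.inf] * K
    for i, order in enumerate(orders):
        undecoded = set(range(K))          # everyone interferes until decoded
        for j in order:
            interf = noise + sum(g[i][k] for k in undecoded if k != j)
            r[j] = min(r[j], math.log2(1 + g[i][j] / interf))
            undecoded.discard(j)
    return r

orders = [[i] for i in range(K)]           # start: decode only the own user
improved = True
while improved:
    improved = False
    base = min(rates(orders))
    for i in range(K):
        for j in range(K):
            if j not in orders[i]:
                trial = [o[:] for o in orders]
                trial[i].insert(0, j)      # decode user j first at receiver i
                if min(rates(trial)) > base:
                    orders, base, improved = trial, min(rates(trial)), True

print("decoding orders:", orders, "min rate:", round(min(rates(orders)), 3))
```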
2201.04205 | Waleed Yousef | Waleed A.Yousef, Hisham E. Mohammed, Andrew A. Naguib, Rafat S. Eid,
Sherif E. Emabrak, Ahmed F. Hamed, Yusuf M. Khalifa, Shrouk T. AbdElrheem,
Eman A. Awad, Sara G. Gaafar, Alaa M. Mamdoh, Nada A. Shawky | JSOL: JavaScript Open-source Library for Grammar of Graphics | null | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce the JavaScript Open-source Library (\libname), a
high-level grammar for representing data in visualization graphs and plots.
\libname's perspective on the grammar of graphics is unique; it provides
state-of-the-art rules for encoding visual primitives that can be used to generate
a known scene or to invent a new one. \libname~has many rules developed
specifically for data-munging, mapping, and visualization through many layers,
such as algebra, scales, and geometries. Additionally, it has a compiler that
incorporates and combines all rules specified by a user and puts them in a flow
to validate it as a visualization grammar and check its requisites. Users can
customize scenes through a pipeline that either applies customized rules or adds
new ones. We evaluated \libname~on a multitude of plots to check the rule
specifications needed to customize a specific plot. Although the project is still
under development and many enhancements are under construction, this paper
describes the first developed version of \libname, circa 2016, of which an
open-source version is available. One immediate practical deployment for
JSOL is its integration with the open-source version of the Data Visualization
Platform (DVP) \citep{Yousef2019DVP-arxiv}
| [
{
"created": "Tue, 11 Jan 2022 21:23:23 GMT",
"version": "v1"
}
] | 2022-01-13 | [
[
"Yousef",
"Waleed A.",
""
],
[
"Mohammed",
"Hisham E.",
""
],
[
"Naguib",
"Andrew A.",
""
],
[
"Eid",
"Rafat S.",
""
],
[
"Emabrak",
"Sherif E.",
""
],
[
"Hamed",
"Ahmed F.",
""
],
[
"Khalifa",
"Yusuf M.",
""
],
[
"AbdElrheem",
"Shrouk T.",
""
],
[
"Awad",
"Eman A.",
""
],
[
"Gaafar",
"Sara G.",
""
],
[
"Mamdoh",
"Alaa M.",
""
],
[
"Shawky",
"Nada A.",
""
]
] | In this paper, we introduce the JavaScript Open-source Library (\libname), a high-level grammar for representing data in visualization graphs and plots. \libname's perspective on the grammar of graphics is unique; it provides state-of-the-art rules for encoding visual primitives that can be used to generate a known scene or to invent a new one. \libname~has many rules developed specifically for data-munging, mapping, and visualization through many layers, such as algebra, scales, and geometries. Additionally, it has a compiler that incorporates and combines all rules specified by a user and puts them in a flow to validate it as a visualization grammar and check its requisites. Users can customize scenes through a pipeline that either applies customized rules or adds new ones. We evaluated \libname~on a multitude of plots to check the rule specifications needed to customize a specific plot. Although the project is still under development and many enhancements are under construction, this paper describes the first developed version of \libname, circa 2016, of which an open-source version is available. One immediate practical deployment for JSOL is its integration with the open-source version of the Data Visualization Platform (DVP) \citep{Yousef2019DVP-arxiv}
1912.11855 | Aditi Sharma | Aditi Sharma, Ravi Ranjan | Software Effort Estimation using Neuro Fuzzy Inference System: Past and
Present | null | International Journal on Recent and Innovation Trends in Computing
and Communication ISSN: 2321-8169 2017 | null | null | cs.SE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most important reason for project failure is poor effort estimation. Software
development effort estimation is needed for assigning appropriate team members
for development, allocating resources for software development, binding, etc.
Inaccurate software estimation may lead to project delays, budget overruns, or
cancellation of the project. But existing effort estimation models are not very
efficient. In this paper, we analyze a newer approach to estimation, i.e., the
Neuro Fuzzy Inference System (NFIS). It is a hybrid model that combines the
components of artificial neural networks with fuzzy logic to give better
estimates.
| [
{
"created": "Thu, 26 Dec 2019 12:55:38 GMT",
"version": "v1"
}
] | 2019-12-30 | [
[
"Sharma",
"Aditi",
""
],
[
"Ranjan",
"Ravi",
""
]
] | The most important reason for project failure is poor effort estimation. Software development effort estimation is needed for assigning appropriate team members for development, allocating resources for software development, binding, etc. Inaccurate software estimation may lead to project delays, budget overruns, or cancellation of the project. But existing effort estimation models are not very efficient. In this paper, we analyze a newer approach to estimation, i.e., the Neuro Fuzzy Inference System (NFIS). It is a hybrid model that combines the components of artificial neural networks with fuzzy logic to give better estimates.
2012.04580 | James Jordon | James Jordon, Alan Wilson and Mihaela van der Schaar | Synthetic Data: Opening the data floodgates to enable faster, more
directed development of machine learning methods | null | null | null | null | cs.LG cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many ground-breaking advancements in machine learning can be attributed to
the availability of a large volume of rich data. Unfortunately, many
large-scale datasets are highly sensitive, such as healthcare data, and are not
widely available to the machine learning community. Generating synthetic data
with privacy guarantees provides one such solution, allowing meaningful
research to be carried out "at scale" - by allowing the entirety of the machine
learning community to potentially accelerate progress within a given field. In
this article, we provide a high-level view of synthetic data: what it means,
how we might evaluate it and how we might use it.
| [
{
"created": "Tue, 8 Dec 2020 17:26:10 GMT",
"version": "v1"
}
] | 2020-12-09 | [
[
"Jordon",
"James",
""
],
[
"Wilson",
"Alan",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] | Many ground-breaking advancements in machine learning can be attributed to the availability of a large volume of rich data. Unfortunately, many large-scale datasets are highly sensitive, such as healthcare data, and are not widely available to the machine learning community. Generating synthetic data with privacy guarantees provides one such solution, allowing meaningful research to be carried out "at scale" - by allowing the entirety of the machine learning community to potentially accelerate progress within a given field. In this article, we provide a high-level view of synthetic data: what it means, how we might evaluate it and how we might use it. |
1910.08888 | Carlo Zaniolo | Carlo Zaniolo, Ariyam Das, Jiaqi Gu, Youfu Li, Mingda li, Jin Wang | Monotonic Properties of Completed Aggregates in Recursive Queries | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of aggregates in recursion enables efficient and scalable support for
a wide range of BigData algorithms, including those used in graph applications,
KDD applications, and ML applications, which have proven difficult to be
expressed and supported efficiently in BigData systems supporting Datalog or
SQL. The problem with these languages and systems is that, to avoid the
semantic and computational issues created by non-monotonic constructs in
recursion, they only allow programs that are stratified with respect to
negation and aggregates. Now, while this crippling restriction is
well-justified for negation, it is frequently unjustified for aggregates, since
(i) aggregates are often monotonic in the standard lattice of set-containment,
(ii) the PreM property guarantees that programs with extrema in recursion are
equivalent to stratified programs where extrema are used as post-constraints,
and (iii) any program computing any aggregates on sets of facts of predictable
cardinality is tantamount to a stratified program where the precomputation of the
cardinality of the set is followed by a stratum where recursive rules only use
monotonic constructs. With (i) and (ii) covered in previous papers, this paper
focuses on (iii) using examples of great practical interest. For such examples,
we provide a formal semantics that is conducive to efficient and scalable
implementations via well-known techniques such as semi-naive fixpoint currently
supported by most Datalog and SQL3 systems.
| [
{
"created": "Sun, 20 Oct 2019 03:52:40 GMT",
"version": "v1"
}
] | 2019-10-22 | [
[
"Zaniolo",
"Carlo",
""
],
[
"Das",
"Ariyam",
""
],
[
"Gu",
"Jiaqi",
""
],
[
"Li",
"Youfu",
""
],
[
"li",
"Mingda",
""
],
[
"Wang",
"Jin",
""
]
] | The use of aggregates in recursion enables efficient and scalable support for a wide range of BigData algorithms, including those used in graph applications, KDD applications, and ML applications, which have proven difficult to express and support efficiently in BigData systems supporting Datalog or SQL. The problem with these languages and systems is that, to avoid the semantic and computational issues created by non-monotonic constructs in recursion, they only allow programs that are stratified with respect to negation and aggregates. Now, while this crippling restriction is well-justified for negation, it is frequently unjustified for aggregates, since (i) aggregates are often monotonic in the standard lattice of set-containment, (ii) the PreM property guarantees that programs with extrema in recursion are equivalent to stratified programs where extrema are used as post-constraints, and (iii) any program computing any aggregates on sets of facts of predictable cardinality is tantamount to a stratified program where the precomputation of the cardinality of the set is followed by a stratum where recursive rules only use monotonic constructs. With (i) and (ii) covered in previous papers, this paper focuses on (iii) using examples of great practical interest. For such examples, we provide a formal semantics that is conducive to efficient and scalable implementations via well-known techniques such as semi-naive fixpoint currently supported by most Datalog and SQL3 systems.
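A small example of the kind of program this line of work enables: a min aggregate inside recursion (single-source shortest paths), evaluated with a semi-naive fixpoint that only rederives from newly changed facts.

```python
# Semi-naive fixpoint with a min aggregate in recursion: shortest paths.
edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 1)]

dist = {("a", "a"): 0}                    # dist(source, node) -> best cost so far
delta = dict(dist)
while delta:
    new_delta = {}
    for (s, u), d in delta.items():
        for (x, y, w) in edges:
            if x == u:
                cand, key = d + w, (s, y)
                if cand < dist.get(key, float("inf")):
                    dist[key] = cand      # min aggregate: keep the best cost
                    new_delta[key] = cand
    delta = new_delta                     # semi-naive: recurse only on new facts

print(dist)   # {('a','a'): 0, ('a','b'): 1, ('a','c'): 3, ('a','d'): 4}
```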
2407.15865 | Craig Pirie | Craig Pirie, Harsha Kalutarage, Muhammad Shadi Hajar, Nirmalie
Wiratunga, Subodha Charles, Geeth Sandaru Madhushan, Priyantha Buddhika,
Supun Wijesiriwardana, Akila Dimantha, Kithdara Hansamal, Shalitha
Pathiranage | A Survey of AI-Powered Mini-Grid Solutions for a Sustainable Future in
Rural Communities | null | null | null | null | cs.LG cs.AI cs.CE | http://creativecommons.org/licenses/by/4.0/ | This paper presents a comprehensive survey of AI-driven mini-grid solutions
aimed at enhancing sustainable energy access. It emphasises the potential of
mini-grids, which can operate independently or in conjunction with national
power grids, to provide reliable and affordable electricity to remote
communities. Given the inherent unpredictability of renewable energy sources
such as solar and wind, the necessity for accurate energy forecasting and
management is discussed, highlighting the role of advanced AI techniques in
forecasting energy supply and demand, optimising grid operations, and ensuring
sustainable energy distribution. This paper reviews various forecasting models,
including statistical methods, machine learning algorithms, and hybrid
approaches, evaluating their effectiveness for both short-term and long-term
predictions. Additionally, it explores public datasets and tools such as
Prophet, NeuralProphet, and N-BEATS for model implementation and validation.
The survey concludes with recommendations for future research, addressing
challenges in model adaptation and optimisation for real-world applications.
| [
{
"created": "Wed, 17 Jul 2024 20:23:38 GMT",
"version": "v1"
}
] | 2024-07-24 | [
[
"Pirie",
"Craig",
""
],
[
"Kalutarage",
"Harsha",
""
],
[
"Hajar",
"Muhammad Shadi",
""
],
[
"Wiratunga",
"Nirmalie",
""
],
[
"Charles",
"Subodha",
""
],
[
"Madhushan",
"Geeth Sandaru",
""
],
[
"Buddhika",
"Priyantha",
""
],
[
"Wijesiriwardana",
"Supun",
""
],
[
"Dimantha",
"Akila",
""
],
[
"Hansamal",
"Kithdara",
""
],
[
"Pathiranage",
"Shalitha",
""
]
] | This paper presents a comprehensive survey of AI-driven mini-grid solutions aimed at enhancing sustainable energy access. It emphasises the potential of mini-grids, which can operate independently or in conjunction with national power grids, to provide reliable and affordable electricity to remote communities. Given the inherent unpredictability of renewable energy sources such as solar and wind, the necessity for accurate energy forecasting and management is discussed, highlighting the role of advanced AI techniques in forecasting energy supply and demand, optimising grid operations, and ensuring sustainable energy distribution. This paper reviews various forecasting models, including statistical methods, machine learning algorithms, and hybrid approaches, evaluating their effectiveness for both short-term and long-term predictions. Additionally, it explores public datasets and tools such as Prophet, NeuralProphet, and N-BEATS for model implementation and validation. The survey concludes with recommendations for future research, addressing challenges in model adaptation and optimisation for real-world applications. |
2304.12152 | Haitian Jiang | Haitian Jiang, Dongliang Xiong, Xiaowen Jiang, Li Ding, Liang Chen,
Kai Huang | Efficient Halftoning via Deep Reinforcement Learning | null | IEEE Transactions on Image Processing (TIP), 2023 | 10.1109/TIP.2023.3318937 | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Halftoning aims to reproduce a continuous-tone image with pixels whose
intensities are constrained to two discrete levels. This technique has been
deployed on every printer, and the majority of them adopt fast methods (e.g.,
ordered dithering, error diffusion) that fail to render structural details,
which determine halftone's quality. Other prior methods of pursuing visual
pleasure by searching for the optimal halftone solution, on the contrary,
suffer from their high computational cost. In this paper, we propose a fast and
structure-aware halftoning method via a data-driven approach. Specifically, we
formulate halftoning as a reinforcement learning problem, in which each binary
pixel's value is regarded as an action chosen by a virtual agent with a shared
fully convolutional neural network (CNN) policy. In the offline phase, an
effective gradient estimator is utilized to train the agents in producing
high-quality halftones in one action step. Then, halftones can be generated
online by one fast CNN inference. Besides, we propose a novel anisotropy
suppressing loss function, which brings the desirable blue-noise property.
Finally, we find that optimizing SSIM could result in holes in flat areas,
which can be avoided by weighting the metric with the contone's contrast map.
Experiments show that our framework can effectively train a light-weight CNN,
which is 15x faster than previous structure-aware methods, to generate
blue-noise halftones with satisfactory visual quality. We also present a
prototype of deep multitoning to demonstrate the extensibility of our method.
| [
{
"created": "Mon, 24 Apr 2023 15:03:37 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Oct 2023 03:40:42 GMT",
"version": "v2"
}
] | 2023-10-16 | [
[
"Jiang",
"Haitian",
""
],
[
"Xiong",
"Dongliang",
""
],
[
"Jiang",
"Xiaowen",
""
],
[
"Ding",
"Li",
""
],
[
"Chen",
"Liang",
""
],
[
"Huang",
"Kai",
""
]
] | Halftoning aims to reproduce a continuous-tone image with pixels whose intensities are constrained to two discrete levels. This technique has been deployed on every printer, and the majority of them adopt fast methods (e.g., ordered dithering, error diffusion) that fail to render structural details, which determine the halftone's quality. Prior methods that instead pursue visual quality by searching for the optimal halftone solution suffer from high computational cost. In this paper, we propose a fast and structure-aware halftoning method via a data-driven approach. Specifically, we formulate halftoning as a reinforcement learning problem, in which each binary pixel's value is regarded as an action chosen by a virtual agent with a shared fully convolutional neural network (CNN) policy. In the offline phase, an effective gradient estimator is utilized to train the agents in producing high-quality halftones in one action step. Then, halftones can be generated online by one fast CNN inference. Besides, we propose a novel anisotropy suppressing loss function, which brings the desirable blue-noise property. Finally, we find that optimizing SSIM could result in holes in flat areas, which can be avoided by weighting the metric with the contone's contrast map. Experiments show that our framework can effectively train a light-weight CNN, which is 15x faster than previous structure-aware methods, to generate blue-noise halftones with satisfactory visual quality. We also present a prototype of deep multitoning to demonstrate the extensibility of our method. |
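As context for the fast baselines named in the abstract above, here is a minimal NumPy sketch of Floyd-Steinberg error diffusion — the classic fast method the paper's structure-aware RL approach is contrasted with. The 128 threshold and the toy gradient input are illustrative choices, not from the paper.

```python
import numpy as np

def floyd_steinberg(img):
    """Classic error-diffusion halftoning: binarize each pixel and push
    the quantization error onto unvisited neighbors."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 255.0 if old >= 128 else 0.0    # binarize the pixel
            out[y, x] = new
            err = old - new                       # diffuse the error
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return (out >= 128).astype(np.uint8)          # two discrete levels

halftone = floyd_steinberg(np.tile(np.linspace(0, 255, 64), (64, 1)))
```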
2011.14058 | Wei He | Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang | Efficient Attention Network: Accelerate Attention by Searching Where to
Plug | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, many plug-and-play self-attention modules have been proposed to
enhance model generalization by exploiting the internal information of deep
convolutional neural networks (CNNs). Previous works emphasize the design of
attention modules for specific functionality, e.g., lightweight or
task-oriented attention. However, they ignore the importance of where to plug
in the attention module, since they take for granted that a module should be
attached individually to each block of the entire CNN backbone, so the
computational cost and parameter count grow with network depth. Thus, we
propose a framework called Efficient Attention Network (EAN) to improve the
efficiency of existing attention modules. In EAN, we leverage the sharing
mechanism (Huang et al. 2020) to share the attention module within the
backbone and search where to connect the shared attention module via
reinforcement learning. We thereby obtain an attention network with sparse
connections between the backbone and modules that (1) maintains accuracy, (2)
reduces the extra parameter increment, and (3) accelerates inference.
Extensive experiments on widely used benchmarks and popular attention
networks show the effectiveness of EAN. Furthermore, we empirically show that
EAN transfers to other tasks and captures informative features. The code is
available at https://github.com/gbup-group/EAN-efficient-attention-network.
| [
{
"created": "Sat, 28 Nov 2020 03:31:08 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Jul 2021 12:44:58 GMT",
"version": "v2"
}
] | 2021-07-13 | [
[
"Huang",
"Zhongzhan",
""
],
[
"Liang",
"Senwei",
""
],
[
"Liang",
"Mingfu",
""
],
[
"He",
"Wei",
""
],
[
"Yang",
"Haizhao",
""
]
] | Recently, many plug-and-play self-attention modules have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs). Previous works emphasize the design of attention modules for specific functionality, e.g., lightweight or task-oriented attention. However, they ignore the importance of where to plug in the attention module, since they take for granted that a module should be attached individually to each block of the entire CNN backbone, so the computational cost and parameter count grow with network depth. Thus, we propose a framework called Efficient Attention Network (EAN) to improve the efficiency of existing attention modules. In EAN, we leverage the sharing mechanism (Huang et al. 2020) to share the attention module within the backbone and search where to connect the shared attention module via reinforcement learning. We thereby obtain an attention network with sparse connections between the backbone and modules that (1) maintains accuracy, (2) reduces the extra parameter increment, and (3) accelerates inference. Extensive experiments on widely used benchmarks and popular attention networks show the effectiveness of EAN. Furthermore, we empirically show that EAN transfers to other tasks and captures informative features. The code is available at https://github.com/gbup-group/EAN-efficient-attention-network. |
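A minimal PyTorch sketch of the two ingredients the abstract describes: one attention module shared across blocks, attached only at a sparse set of positions. The SE-style module, channel sizes, and the fixed `plug_at` positions are illustrative assumptions of mine; in EAN the connection pattern is found by reinforcement-learning search, not hand-picked.

```python
import torch
import torch.nn as nn

class SharedSEAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention, instantiated once
    and reused at every plugged position (the sharing mechanism)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                   # squeeze
        return x * self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excite

class Backbone(nn.Module):
    def __init__(self, channels=64, depth=6, plug_at=(1, 4)):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(depth))
        self.attn = SharedSEAttention(channels)  # a single shared instance
        self.plug_at = set(plug_at)              # sparse connections (searched
                                                 # via RL in the paper)
    def forward(self, x):
        for i, blk in enumerate(self.blocks):
            x = torch.relu(blk(x))
            if i in self.plug_at:                # attend only where plugged
                x = self.attn(x)
        return x

y = Backbone()(torch.randn(1, 64, 32, 32))
```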
2307.12517 | Pourya Shamsolmoali | Pourya Shamsolmoali, Masoumeh Zareapoor | Entropy Transformer Networks: A Learning Approach via Tangent Bundle
Data Manifold | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper focuses on an accurate and fast interpolation approach for the
image transformations employed in the design of CNN architectures. Standard
Spatial Transformer Networks (STNs) rely on bilinear or linear interpolation,
with unrealistic assumptions about the underlying data distributions, which
leads to poor performance under scale variations. Moreover, STNs do not
preserve the norm of gradients during propagation because they depend on
sparse neighboring pixels. To address this problem, a novel Entropy STN
(ESTN) is proposed that interpolates on the data manifold distribution. In
particular, random samples are generated for each pixel in association with
the tangent space of the data manifold, and a linear approximation of their
intensity values is constructed with an entropy regularizer to compute the
transformer parameters. A simple yet effective technique is also proposed to
normalize the non-zero values of the convolution operation and to fine-tune
the layers for gradient norm regularization during training. Experiments on
challenging benchmarks show that the proposed ESTN can improve predictive
accuracy over a range of computer vision tasks, including image
reconstruction and classification, while reducing the computational cost.
| [
{
"created": "Mon, 24 Jul 2023 04:21:51 GMT",
"version": "v1"
}
] | 2023-07-25 | [
[
"Shamsolmoali",
"Pourya",
""
],
[
"Zareapoor",
"Masoumeh",
""
]
] | This paper focuses on an accurate and fast interpolation approach for the image transformations employed in the design of CNN architectures. Standard Spatial Transformer Networks (STNs) rely on bilinear or linear interpolation, with unrealistic assumptions about the underlying data distributions, which leads to poor performance under scale variations. Moreover, STNs do not preserve the norm of gradients during propagation because they depend on sparse neighboring pixels. To address this problem, a novel Entropy STN (ESTN) is proposed that interpolates on the data manifold distribution. In particular, random samples are generated for each pixel in association with the tangent space of the data manifold, and a linear approximation of their intensity values is constructed with an entropy regularizer to compute the transformer parameters. A simple yet effective technique is also proposed to normalize the non-zero values of the convolution operation and to fine-tune the layers for gradient norm regularization during training. Experiments on challenging benchmarks show that the proposed ESTN can improve predictive accuracy over a range of computer vision tasks, including image reconstruction and classification, while reducing the computational cost. |
1605.04359 | Aman Madaan | Aman Madaan, Sunita Sarawagi | Occurrence Statistics of Entities, Relations and Types on the Web | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of collecting reliable estimates of occurrence of entities on the
open web forms the premise for this report. The models learned for tagging
entities cannot be expected to perform well when deployed on the web. This is
owing to the severe mismatch in the distributions of such entities on the web
and in the relatively diminutive training data. In this report, we build up the
case for maximum mean discrepancy for estimation of occurrence statistics of
entities on the web, taking a review of named entity disambiguation techniques
and related concepts along the way.
| [
{
"created": "Sat, 14 May 2016 01:13:48 GMT",
"version": "v1"
}
] | 2016-05-17 | [
[
"Madaan",
"Aman",
""
],
[
"Sarawagi",
"Sunita",
""
]
] | The problem of collecting reliable estimates of occurrence of entities on the open web forms the premise for this report. The models learned for tagging entities cannot be expected to perform well when deployed on the web. This is owing to the severe mismatch in the distributions of such entities on the web and in the relatively diminutive training data. In this report, we build up the case for maximum mean discrepancy for estimation of occurrence statistics of entities on the web, taking a review of named entity disambiguation techniques and related concepts along the way. |
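Since the report above builds its case on maximum mean discrepancy, a small self-contained sketch of the standard unbiased MMD^2 estimator with an RBF kernel may help fix ideas; the Gaussian toy data and the bandwidth `gamma` are illustrative choices, not from the report.

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared maximum mean discrepancy with the
    RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    m, n = len(X), len(Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))  # drop diagonal
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
# Two shifted Gaussian samples stand in for "training data" vs "web" draws.
print(mmd2_unbiased(rng.normal(0, 1, (200, 2)), rng.normal(1, 1, (200, 2))))
```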
2002.04711 | Said Hanafi | Fred Glover, Said Hanafi, and Gintaras Palubeckis | Bi-objective Optimization of Biclustering with Binary Data | 37 pages | null | null | null | cs.AI cs.DM math.CO math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering consists of partitioning data objects into subsets called clusters
according to some similarity criteria. This paper addresses a generalization
called quasi-clustering that allows overlapping of clusters, and which we link
to biclustering. Biclustering simultaneously groups the objects and features so
that a specific group of objects has a special group of features. In recent
years, biclustering has received a lot of attention in several practical
applications. In this paper we consider the bi-objective optimization of the
biclustering problem with binary data. First, we present an integer
programming formulation for the bi-objective biclustering problem. Next, we
propose a constructive heuristic based on the set intersection operation and
its efficient implementation for solving the series of mono-objective
problems used inside the Epsilon-constraint method (obtained by keeping one
objective function while the other is integrated into the constraints).
Finally, our experimental results show that using CPLEX solver as an exact
algorithm for finding an optimal solution drastically increases the
computational cost for large instances, while our proposed heuristic provides
very good results and significantly reduces the computational expense.
| [
{
"created": "Sun, 9 Feb 2020 21:49:26 GMT",
"version": "v1"
}
] | 2020-02-13 | [
[
"Glover",
"Fred",
""
],
[
"Hanafi",
"Said",
""
],
[
"Palubeckis",
"Gintaras",
""
]
] | Clustering consists of partitioning data objects into subsets called clusters according to some similarity criteria. This paper addresses a generalization called quasi-clustering that allows overlapping of clusters, and which we link to biclustering. Biclustering simultaneously groups the objects and features so that a specific group of objects has a special group of features. In recent years, biclustering has received a lot of attention in several practical applications. In this paper we consider the bi-objective optimization of the biclustering problem with binary data. First, we present an integer programming formulation for the bi-objective biclustering problem. Next, we propose a constructive heuristic based on the set intersection operation and its efficient implementation for solving the series of mono-objective problems used inside the Epsilon-constraint method (obtained by keeping one objective function while the other is integrated into the constraints). Finally, our experimental results show that using CPLEX solver as an exact algorithm for finding an optimal solution drastically increases the computational cost for large instances, while our proposed heuristic provides very good results and significantly reduces the computational expense. |
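A toy sketch of the Epsilon-constraint mechanic the abstract relies on: one objective is kept, the other becomes a constraint whose bound eps is swept, and each resulting mono-objective problem is solved. Brute force over made-up candidates stands in for the paper's constructive heuristic and CPLEX runs; the subset example and its second objective are invented for illustration.

```python
import itertools

def epsilon_constraint(f1, f2, candidates):
    """Sweep eps over attainable f2 values, solve max f1 s.t. f2 >= eps,
    and collect the distinct optima (an approximate Pareto front)."""
    front = {}
    for eps in sorted({f2(c) for c in candidates}):
        feasible = [c for c in candidates if f2(c) >= eps]  # never empty
        best = max(feasible, key=f1)
        front[(f1(best), f2(best))] = best
    return front

# Made-up bi-objective toy: choose a subset of 4 items, maximizing size (f1)
# and a fictitious cohesion score (f2) that penalizes size.
items = range(4)
cands = [frozenset(s) for r in range(5)
         for s in itertools.combinations(items, r)]
f2 = lambda s: 4 - 2 * len(s) + (0 in s)
for (v1, v2), sol in epsilon_constraint(len, f2, cands).items():
    print(v1, v2, sorted(sol))
```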
2108.02694 | Liming Xu | Liming Xu, Dave Towey, Andrew French, Steve Benford, Zhi Quan Zhou and
Tsong Yueh Chen | Using Metamorphic Relations to Verify and Enhance Artcode Classification | 32 pages, 11 figures | null | null | null | cs.SE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Software testing is often hindered where it is impossible or impractical to
determine the correctness of the behaviour or output of the software under test
(SUT), a situation known as the oracle problem. An example of an area facing
the oracle problem is automatic image classification, using machine learning to
classify an input image as one of a set of predefined classes. An approach to
software testing that alleviates the oracle problem is metamorphic testing
(MT). While traditional software testing examines the correctness of individual
test cases, MT instead examines the relations amongst multiple executions of
test cases and their outputs. These relations are called metamorphic relations
(MRs): if an MR is found to be violated, then a fault must exist in the SUT.
This paper examines the problem of classifying images containing visually
hidden markers called Artcodes, and applies MT to verify and enhance the
trained classifiers. This paper further examines two MRs, Separation and
Occlusion, and reports on their capability in verifying the image
classification using one-way analysis of variance (ANOVA) in conjunction with
three other statistical analysis methods: t-test (for unequal variances),
Kruskal-Wallis test, and Dunnett's test. In addition to our previously
studied classifier, which used Random Forests, we introduce a new classifier
that uses a
support vector machine, and present its MR-augmented version. Experimental
evaluations across a number of performance metrics show that the augmented
classifiers can achieve better performance than non-augmented classifiers. This
paper also analyses how the enhanced performance is obtained.
| [
{
"created": "Thu, 5 Aug 2021 15:54:56 GMT",
"version": "v1"
}
] | 2021-08-06 | [
[
"Xu",
"Liming",
""
],
[
"Towey",
"Dave",
""
],
[
"French",
"Andrew",
""
],
[
"Benford",
"Steve",
""
],
[
"Zhou",
"Zhi Quan",
""
],
[
"Chen",
"Tsong Yueh",
""
]
] | Software testing is often hindered where it is impossible or impractical to determine the correctness of the behaviour or output of the software under test (SUT), a situation known as the oracle problem. An example of an area facing the oracle problem is automatic image classification, using machine learning to classify an input image as one of a set of predefined classes. An approach to software testing that alleviates the oracle problem is metamorphic testing (MT). While traditional software testing examines the correctness of individual test cases, MT instead examines the relations amongst multiple executions of test cases and their outputs. These relations are called metamorphic relations (MRs): if an MR is found to be violated, then a fault must exist in the SUT. This paper examines the problem of classifying images containing visually hidden markers called Artcodes, and applies MT to verify and enhance the trained classifiers. This paper further examines two MRs, Separation and Occlusion, and reports on their capability in verifying the image classification using one-way analysis of variance (ANOVA) in conjunction with three other statistical analysis methods: t-test (for unequal variances), Kruskal-Wallis test, and Dunnett's test. In addition to our previously studied classifier, which used Random Forests, we introduce a new classifier that uses a support vector machine, and present its MR-augmented version. Experimental evaluations across a number of performance metrics show that the augmented classifiers can achieve better performance than non-augmented classifiers. This paper also analyses how the enhanced performance is obtained. |
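For readers new to MT, a minimal harness showing the mechanic the abstract describes: checking a relation across multiple executions instead of checking outputs against an oracle. The toy SUT and the identity-style relation below are invented for illustration; they are not the paper's Separation or Occlusion MRs over Artcode images.

```python
def check_metamorphic_relation(sut, transform, inputs):
    """Minimal MT harness: the MR here asserts that `transform` must not
    change the SUT's output. Any violation implies a fault in the SUT --
    no ground-truth oracle is needed."""
    violations = []
    for x in inputs:
        if sut(x) != sut(transform(x)):
            violations.append(x)
    return violations

# Toy SUT and MR: a threshold "classifier" should be invariant to adding
# zero, since the follow-up input is semantically identical to the source.
sut = lambda x: "big" if x > 10 else "small"
mr_transform = lambda x: x + 0.0
print(check_metamorphic_relation(sut, mr_transform, [3, 11, 42]))  # []
```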
2305.10110 | Wenzhao Zhao | Wenzhao Zhao, Barbara D. Wichtmann, Steffen Albert, Angelika Maurer,
Frank G. Z\"ollner, Ulrike Attenberger and J\"urgen Hesser | Adaptive aggregation of Monte Carlo augmented decomposed filters for
efficient group-equivariant convolutional neural network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Group-equivariant convolutional neural networks (G-CNN) heavily rely on
parameter sharing to increase CNN's data efficiency and performance. However,
the parameter-sharing strategy greatly increases the computational burden for
each added parameter, which hampers its application to deep neural network
models. In this paper, we address these problems by proposing a
non-parameter-sharing approach for group equivariant neural networks. The
proposed methods adaptively aggregate a diverse range of filters by a weighted
sum of stochastically augmented decomposed filters. We give a theoretical
proof of how the continuous group convolution can be approximated by our
methods.
Our method applies to both continuous and discrete groups, where the
augmentation is implemented using Monte Carlo sampling and bootstrap
resampling, respectively. We demonstrate that our methods serve as an efficient
extension of standard CNN. Experiments on group equivariance tests show how our
methods can achieve superior performance to parameter-sharing group equivariant
networks. Experiments on image classification and image denoising tasks show
that in certain scenarios, with a suitable set of filter bases, our method
helps improve the performance of standard CNNs and build efficient lightweight
image denoising networks. The code will be available at
https://github.com/ZhaoWenzhao/MCG_CNN.
| [
{
"created": "Wed, 17 May 2023 10:18:02 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Feb 2024 22:22:29 GMT",
"version": "v2"
},
{
"created": "Wed, 1 May 2024 21:54:24 GMT",
"version": "v3"
}
] | 2024-05-03 | [
[
"Zhao",
"Wenzhao",
""
],
[
"Wichtmann",
"Barbara D.",
""
],
[
"Albert",
"Steffen",
""
],
[
"Maurer",
"Angelika",
""
],
[
"Zöllner",
"Frank G.",
""
],
[
"Attenberger",
"Ulrike",
""
],
[
"Hesser",
"Jürgen",
""
]
] | Group-equivariant convolutional neural networks (G-CNN) heavily rely on parameter sharing to increase CNN's data efficiency and performance. However, the parameter-sharing strategy greatly increases the computational burden for each added parameter, which hampers its application to deep neural network models. In this paper, we address these problems by proposing a non-parameter-sharing approach for group equivariant neural networks. The proposed methods adaptively aggregate a diverse range of filters by a weighted sum of stochastically augmented decomposed filters. We give a theoretical proof of how the continuous group convolution can be approximated by our methods. Our method applies to both continuous and discrete groups, where the augmentation is implemented using Monte Carlo sampling and bootstrap resampling, respectively. We demonstrate that our methods serve as an efficient extension of standard CNN. Experiments on group equivariance tests show how our methods can achieve superior performance to parameter-sharing group equivariant networks. Experiments on image classification and image denoising tasks show that in certain scenarios, with a suitable set of filter bases, our method helps improve the performance of standard CNNs and build efficient lightweight image denoising networks. The code will be available at https://github.com/ZhaoWenzhao/MCG_CNN. |
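A rough NumPy sketch of the Monte Carlo ingredient described above, for a continuous group (2D rotations): sample group elements, apply them to a decomposed base filter, and take a weighted sum. Random weights stand in for the learned aggregation weights, and the line filter is a made-up example.

```python
import numpy as np
from scipy.ndimage import rotate

def mc_augmented_filter(base, n_samples=8, rng=None):
    """Weighted sum of stochastically rotated copies of a base filter;
    the paper's proof concerns how such sums approximate a continuous
    group convolution when the weights are learned."""
    rng = rng or np.random.default_rng(0)
    angles = rng.uniform(0.0, 360.0, size=n_samples)  # Monte Carlo samples
    weights = rng.normal(size=n_samples)              # learned in practice
    stack = [rotate(base, a, reshape=False, order=1) for a in angles]
    return np.tensordot(weights, np.stack(stack), axes=1)

base = np.zeros((7, 7))
base[3, :] = 1.0                                      # a simple line filter
print(mc_augmented_filter(base).shape)                # (7, 7)
```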
2004.10878 | Sujit Bhattacharya Professor | Sujit Bhattacharya and Shubham Singh | Visible Insights of the Invisible Pandemic: A Scientometric, Altmetric
and Topic Trend Analysis | 21 pages, 4 Figures and 4 tables | null | null | null | cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent SARS-COV-2 virus outbreak has created an unprecedented global
health crisis. The disease is showing alarming trends: the number of people
infected, the number of new cases, and the death rate all highlight the need
to control the disease as early as possible. The strategy for governments
around the globe is now to limit the spread of the virus until the research
community develops a treatment, drug, or vaccine against it. The outbreak has
unsurprisingly led to a huge volume of research surrounding the disease
within a short period of time. It has also led to intense social media
activity on Twitter, Facebook, dedicated blogs, news reports, and other
online sites actively discussing its various aspects. It is therefore a
useful and challenging exercise to draw from this large body of research the
key papers that form the research front, their influence in the research
community, and other important research insights. Similarly, it is important
to discern the key issues concerning this disease that occupy society. The
paper is motivated by this. It attempts to identify the most influential
papers, the key knowledge base, and the major topics surrounding COVID-19
research. It further attempts to capture society's perception by discerning
key topics that are trending online. The study concludes by highlighting its
implications.
| [
{
"created": "Wed, 22 Apr 2020 21:53:15 GMT",
"version": "v1"
}
] | 2020-04-24 | [
[
"Bhattacharya",
"Sujit",
""
],
[
"Singh",
"Shubham",
""
]
] | The recent SARS-COV-2 virus outbreak has created an unprecedented global health crisis. The disease is showing alarming trends: the number of people infected, the number of new cases, and the death rate all highlight the need to control the disease as early as possible. The strategy for governments around the globe is now to limit the spread of the virus until the research community develops a treatment, drug, or vaccine against it. The outbreak has unsurprisingly led to a huge volume of research surrounding the disease within a short period of time. It has also led to intense social media activity on Twitter, Facebook, dedicated blogs, news reports, and other online sites actively discussing its various aspects. It is therefore a useful and challenging exercise to draw from this large body of research the key papers that form the research front, their influence in the research community, and other important research insights. Similarly, it is important to discern the key issues concerning this disease that occupy society. The paper is motivated by this. It attempts to identify the most influential papers, the key knowledge base, and the major topics surrounding COVID-19 research. It further attempts to capture society's perception by discerning key topics that are trending online. The study concludes by highlighting its implications. |
2111.06334 | Sarthak Khanal | Sarthak Khanal, Maria Traskowsky, Doina Caragea | Identification of Fine-Grained Location Mentions in Crisis Tweets | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Identification of fine-grained location mentions in crisis tweets is central
in transforming situational awareness information extracted from social media
into actionable information. Most prior works have focused on identifying
generic locations, without considering their specific types. To facilitate
progress on the fine-grained location identification task, we assemble two
tweet crisis datasets and manually annotate them with specific location types.
The first dataset contains tweets from a mixed set of crisis events, while the
second dataset contains tweets from the global COVID-19 pandemic. We
investigate the performance of state-of-the-art deep learning models for
sequence tagging on these datasets, in both in-domain and cross-domain
settings.
| [
{
"created": "Thu, 11 Nov 2021 17:48:03 GMT",
"version": "v1"
}
] | 2021-11-12 | [
[
"Khanal",
"Sarthak",
""
],
[
"Traskowsky",
"Maria",
""
],
[
"Caragea",
"Doina",
""
]
] | Identification of fine-grained location mentions in crisis tweets is central in transforming situational awareness information extracted from social media into actionable information. Most prior works have focused on identifying generic locations, without considering their specific types. To facilitate progress on the fine-grained location identification task, we assemble two tweet crisis datasets and manually annotate them with specific location types. The first dataset contains tweets from a mixed set of crisis events, while the second dataset contains tweets from the global COVID-19 pandemic. We investigate the performance of state-of-the-art deep learning models for sequence tagging on these datasets, in both in-domain and cross-domain settings. |
cs/0407036 | David Eppstein | David Eppstein | All Maximal Independent Sets and Dynamic Dominance for Sparse Graphs | 10 pages | ACM Trans. Algorithms 5(4):A38, 2009 | 10.1145/1597036.1597042 | null | cs.DS | null | We describe algorithms, based on Avis and Fukuda's reverse search paradigm,
for listing all maximal independent sets in a sparse graph in polynomial time
and delay per output. For bounded degree graphs, our algorithms take constant
time per set generated; for minor-closed graph families, the time is O(n) per
set, and for more general sparse graph families we achieve subquadratic time
per set. We also describe new data structures for maintaining a dynamic vertex
set S in a sparse or minor-closed graph family, and querying the number of
vertices not dominated by S; for minor-closed graph families the time per
update is constant, while it is sublinear for any sparse graph family. We can
also maintain a dynamic vertex set in an arbitrary m-edge graph and test the
independence of the maintained set in time O(sqrt m) per update. We use the
domination data structures as part of our enumeration algorithms.
| [
{
"created": "Thu, 15 Jul 2004 21:04:45 GMT",
"version": "v1"
}
] | 2010-01-11 | [
[
"Eppstein",
"David",
""
]
] | We describe algorithms, based on Avis and Fukuda's reverse search paradigm, for listing all maximal independent sets in a sparse graph in polynomial time and delay per output. For bounded degree graphs, our algorithms take constant time per set generated; for minor-closed graph families, the time is O(n) per set, and for more general sparse graph families we achieve subquadratic time per set. We also describe new data structures for maintaining a dynamic vertex set S in a sparse or minor-closed graph family, and querying the number of vertices not dominated by S; for minor-closed graph families the time per update is constant, while it is sublinear for any sparse graph family. We can also maintain a dynamic vertex set in an arbitrary m-edge graph and test the independence of the maintained set in time O(sqrt m) per update. We use the domination data structures as part of our enumeration algorithms. |
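A compact way to see the enumeration problem above in code: maximal independent sets of G are precisely the maximal cliques of G's complement, so any maximal-clique lister enumerates them. This generic NetworkX sketch illustrates the object being listed; it does not reproduce the paper's reverse-search algorithm or its per-output delay guarantees for sparse graphs.

```python
import networkx as nx

def all_maximal_independent_sets(G):
    """Enumerate maximal independent sets of G as the maximal cliques of
    the complement graph (a standard equivalence)."""
    return [set(c) for c in nx.find_cliques(nx.complement(G))]

G = nx.cycle_graph(5)
for s in all_maximal_independent_sets(G):
    print(sorted(s))   # the five maximal independent sets of C5
```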
2104.11298 | Siddhartha Jayanti | Siddhartha Jayanti | Nash Equilibria of The Multiplayer Colonel Blotto Game on Arbitrary
Measure Spaces | 19 pages | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Colonel Blotto Problem proposed by Borel in 1921 has served as a widely
applicable model of budget-constrained simultaneous winner-take-all
competitions in the social sciences. Applications include elections,
advertising, R&D and more. However, the classic Blotto problem and variants
limit the study to competitions over a finite set of discrete battlefields. In
this paper, we extend the classical theory to study multiplayer Blotto games
over arbitrary measurable battlegrounds, provide an algorithm to efficiently
sample equilibria of symmetric "equipartionable" Generalized Blotto games, and
characterize the symmetric fair equilibria of the Blotto game over the unit
interval.
| [
{
"created": "Thu, 22 Apr 2021 19:52:47 GMT",
"version": "v1"
}
] | 2021-04-26 | [
[
"Jayanti",
"Siddhartha",
""
]
] | The Colonel Blotto Problem proposed by Borel in 1921 has served as a widely applicable model of budget-constrained simultaneous winner-take-all competitions in the social sciences. Applications include elections, advertising, R&D and more. However, the classic Blotto problem and variants limit the study to competitions over a finite set of discrete battlefields. In this paper, we extend the classical theory to study multiplayer Blotto games over arbitrary measurable battlegrounds, provide an algorithm to efficiently sample equilibria of symmetric "equipartionable" Generalized Blotto games, and characterize the symmetric fair equilibria of the Blotto game over the unit interval. |
2212.14129 | Bryan Ford | Bryan Ford | Matchertext: Towards Verbatim Interlanguage Embedding | 23 pages, 4 figures, 2 tables | null | null | null | cs.PL | http://creativecommons.org/licenses/by/4.0/ | Embedding text in one language within text of another is commonplace for
numerous purposes, but usually requires tedious and error-prone "escaping"
transformations on the embedded string. We propose a simple cross-language
syntactic discipline, matchertext, which enables the safe embedding of a string in
any compliant language into a string in any other language via simple
"copy-and-paste" - in particular with no escaping, obfuscation, or expansion of
embedded strings. We apply this syntactic discipline to several common and
frequently-embedded language syntaxes such as URIs, HTML, and JavaScript,
exploring the benefits, costs, and compatibility issues in adopting the
proposed matchertext discipline. One early matchertext-based language is MinML,
a concise but general alternative syntax for writing HTML or XML.
| [
{
"created": "Thu, 29 Dec 2022 00:10:31 GMT",
"version": "v1"
}
] | 2023-01-02 | [
[
"Ford",
"Bryan",
""
]
] | Embedding text in one language within text of another is commonplace for numerous purposes, but usually requires tedious and error-prone "escaping" transformations on the embedded string. We propose a simple cross-language syntactic discipline, matchertext, which enables the safe embedding of a string in any compliant language into a string in any other language via simple "copy-and-paste" - in particular with no escaping, obfuscation, or expansion of embedded strings. We apply this syntactic discipline to several common and frequently-embedded language syntaxes such as URIs, HTML, and JavaScript, exploring the benefits, costs, and compatibility issues in adopting the proposed matchertext discipline. One early matchertext-based language is MinML, a concise but general alternative syntax for writing HTML or XML. |
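A small sketch of what I take to be the core matchertext discipline from the abstract: the ASCII matchers ()[]{} must be properly matched and nested, so a compliant string can be pasted verbatim into a compliant host string. The checker below is my reading of that rule, not code from the paper.

```python
OPENERS = {"(": ")", "[": "]", "{": "}"}
CLOSERS = set(OPENERS.values())

def is_matchertext(s):
    """Return True if all ASCII matchers in s are properly matched,
    which (as I understand the discipline) makes s safe to embed
    verbatim in any other matchertext-compliant string."""
    stack = []
    for ch in s:
        if ch in OPENERS:
            stack.append(OPENERS[ch])
        elif ch in CLOSERS:
            if not stack or stack.pop() != ch:
                return False
    return not stack

print(is_matchertext("f(x[1]) = {y}"))   # True
print(is_matchertext("smile :)"))        # False: unmatched ')'
```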
2311.04076 | Lindia Tjuatja | Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar,
Graham Neubig | Do LLMs exhibit human-like response biases? A case study in survey
design | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) become more capable, there is growing
excitement about the possibility of using LLMs as proxies for humans in
real-world tasks where subjective labels are desired, such as in surveys and
opinion polling. One widely-cited barrier to the adoption of LLMs as proxies
for humans in subjective tasks is their sensitivity to prompt wording - but
interestingly, humans also display sensitivities to instruction changes in the
form of response biases. We investigate the extent to which LLMs reflect human
response biases, if at all. We look to survey design, where human response
biases caused by changes in the wordings of "prompts" have been extensively
explored in social psychology literature. Drawing from these works, we design a
dataset and framework to evaluate whether LLMs exhibit human-like response
biases in survey questionnaires. Our comprehensive evaluation of nine models
shows that popular open and commercial LLMs generally fail to reflect
human-like behavior, particularly in models that have undergone RLHF.
Furthermore, even if a model shows a significant change in the same direction
as humans, we find that they are sensitive to perturbations that do not elicit
significant changes in humans. These results highlight the pitfalls of using
LLMs as human proxies, and underscore the need for finer-grained
characterizations of model behavior. Our code, dataset, and collected samples
are available at https://github.com/lindiatjuatja/BiasMonkey
| [
{
"created": "Tue, 7 Nov 2023 15:40:43 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Nov 2023 22:00:12 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Jan 2024 17:52:31 GMT",
"version": "v3"
},
{
"created": "Mon, 5 Feb 2024 15:12:06 GMT",
"version": "v4"
},
{
"created": "Tue, 6 Feb 2024 04:16:17 GMT",
"version": "v5"
}
] | 2024-02-07 | [
[
"Tjuatja",
"Lindia",
""
],
[
"Chen",
"Valerie",
""
],
[
"Wu",
"Sherry Tongshuang",
""
],
[
"Talwalkar",
"Ameet",
""
],
[
"Neubig",
"Graham",
""
]
] | As large language models (LLMs) become more capable, there is growing excitement about the possibility of using LLMs as proxies for humans in real-world tasks where subjective labels are desired, such as in surveys and opinion polling. One widely-cited barrier to the adoption of LLMs as proxies for humans in subjective tasks is their sensitivity to prompt wording - but interestingly, humans also display sensitivities to instruction changes in the form of response biases. We investigate the extent to which LLMs reflect human response biases, if at all. We look to survey design, where human response biases caused by changes in the wordings of "prompts" have been extensively explored in social psychology literature. Drawing from these works, we design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires. Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior, particularly in models that have undergone RLHF. Furthermore, even if a model shows a significant change in the same direction as humans, we find that they are sensitive to perturbations that do not elicit significant changes in humans. These results highlight the pitfalls of using LLMs as human proxies, and underscore the need for finer-grained characterizations of model behavior. Our code, dataset, and collected samples are available at https://github.com/lindiatjuatja/BiasMonkey |
cs/0606057 | Fredrik Kuivinen | Fredrik Kuivinen | Approximability of Bounded Occurrence Max Ones | Accepted to MFCS 2006 | null | null | null | cs.CC | null | We study the approximability of Max Ones when the number of variable
occurrences is bounded by a constant. For conservative constraint languages
(i.e., when the unary relations are included) we give a complete classification
when the number of occurrences is three or more and a partial classification
when the bound is two.
For the non-conservative case we prove that it is either trivial or
equivalent to the corresponding conservative problem under polynomial-time
many-one reductions.
| [
{
"created": "Tue, 13 Jun 2006 06:44:21 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Kuivinen",
"Fredrik",
""
]
] | We study the approximability of Max Ones when the number of variable occurrences is bounded by a constant. For conservative constraint languages (i.e., when the unary relations are included) we give a complete classification when the number of occurrences is three or more and a partial classification when the bound is two. For the non-conservative case we prove that it is either trivial or equivalent to the corresponding conservative problem under polynomial-time many-one reductions. |
2103.11528 | Son T. Luu | Son T. Luu, Kiet Van Nguyen and Ngan Luu-Thuy Nguyen | A Large-scale Dataset for Hate Speech Detection on Vietnamese Social
Media Texts | IEA/AIE 2021: Advances and Trends in Artificial Intelligence.
Artificial Intelligence Practices, pp 415-426 | null | 10.1007/978-3-030-79457-6_35 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, Vietnam has witnessed rapid growth in the number of social
network users on platforms such as Facebook, Youtube, Instagram, and Tiktok.
On social media, hate speech has become a critical problem for social network
users. To address this problem, we introduce ViHSD - a human-annotated
dataset for automatically detecting hate speech on social networks. This
dataset contains over 30,000 comments; each comment has one of three labels:
CLEAN, OFFENSIVE, or HATE. We also describe the data creation process used to
annotate the dataset and evaluate its quality. Finally, we benchmark the
dataset with deep learning models and transformer models.
| [
{
"created": "Mon, 22 Mar 2021 00:55:47 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2021 02:46:47 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Apr 2021 09:29:18 GMT",
"version": "v3"
},
{
"created": "Tue, 20 Jul 2021 06:22:08 GMT",
"version": "v4"
}
] | 2021-07-21 | [
[
"Luu",
"Son T.",
""
],
[
"Van Nguyen",
"Kiet",
""
],
[
"Nguyen",
"Ngan Luu-Thuy",
""
]
] | In recent years, Vietnam has witnessed rapid growth in the number of social network users on platforms such as Facebook, Youtube, Instagram, and Tiktok. On social media, hate speech has become a critical problem for social network users. To address this problem, we introduce ViHSD - a human-annotated dataset for automatically detecting hate speech on social networks. This dataset contains over 30,000 comments; each comment has one of three labels: CLEAN, OFFENSIVE, or HATE. We also describe the data creation process used to annotate the dataset and evaluate its quality. Finally, we benchmark the dataset with deep learning models and transformer models. |
2210.05391 | Ruoyu Guo | Chenxia Li, Ruoyu Guo, Jun Zhou, Mengtao An, Yuning Du, Lingfeng Zhu,
Yi Liu, Xiaoguang Hu, Dianhai Yu | PP-StructureV2: A Stronger Document Analysis System | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large amount of document data exists in unstructured form such as raw
images without any text information. Designing a practical document image
analysis system is a meaningful but challenging task. In previous work, we
proposed an intelligent document analysis system PP-Structure. In order to
further upgrade the function and performance of PP-Structure, we propose
PP-StructureV2 in this work, which contains two subsystems: Layout Information
Extraction and Key Information Extraction. Firstly, we integrate Image
Direction Correction module and Layout Restoration module to enhance the
functionality of the system. Secondly, 8 practical strategies are utilized in
PP-StructureV2 for better performance. For Layout Analysis model, we introduce
ultra light-weight detector PP-PicoDet and knowledge distillation algorithm FGD
for model lightweighting, which increased the inference speed by 11 times with
comparable mAP. For Table Recognition model, we utilize PP-LCNet, CSP-PAN and
SLAHead to optimize the backbone module, feature fusion module and decoding
module, respectively, which improved the table structure accuracy by 6\% with
comparable inference speed. For Key Information Extraction model, we introduce
VI-LayoutXLM which is a visual-feature independent LayoutXLM architecture,
TB-YX sorting algorithm and U-DML knowledge distillation algorithm, which
brought 2.8\% and 9.1\% improvement respectively on the Hmean of Semantic
Entity Recognition and Relation Extraction tasks. All the above mentioned
models and code are open-sourced in the GitHub repository PaddleOCR.
| [
{
"created": "Tue, 11 Oct 2022 12:07:32 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Oct 2022 07:11:59 GMT",
"version": "v2"
}
] | 2022-10-14 | [
[
"Li",
"Chenxia",
""
],
[
"Guo",
"Ruoyu",
""
],
[
"Zhou",
"Jun",
""
],
[
"An",
"Mengtao",
""
],
[
"Du",
"Yuning",
""
],
[
"Zhu",
"Lingfeng",
""
],
[
"Liu",
"Yi",
""
],
[
"Hu",
"Xiaoguang",
""
],
[
"Yu",
"Dianhai",
""
]
] | A large amount of document data exists in unstructured form such as raw images without any text information. Designing a practical document image analysis system is a meaningful but challenging task. In previous work, we proposed an intelligent document analysis system PP-Structure. In order to further upgrade the function and performance of PP-Structure, we propose PP-StructureV2 in this work, which contains two subsystems: Layout Information Extraction and Key Information Extraction. Firstly, we integrate Image Direction Correction module and Layout Restoration module to enhance the functionality of the system. Secondly, 8 practical strategies are utilized in PP-StructureV2 for better performance. For Layout Analysis model, we introduce ultra light-weight detector PP-PicoDet and knowledge distillation algorithm FGD for model lightweighting, which increased the inference speed by 11 times with comparable mAP. For Table Recognition model, we utilize PP-LCNet, CSP-PAN and SLAHead to optimize the backbone module, feature fusion module and decoding module, respectively, which improved the table structure accuracy by 6\% with comparable inference speed. For Key Information Extraction model, we introduce VI-LayoutXLM which is a visual-feature independent LayoutXLM architecture, TB-YX sorting algorithm and U-DML knowledge distillation algorithm, which brought 2.8\% and 9.1\% improvement respectively on the Hmean of Semantic Entity Recognition and Relation Extraction tasks. All the above mentioned models and code are open-sourced in the GitHub repository PaddleOCR. |
1912.11576 | Sheng Zhou | Yining Xu, Sheng Zhou | On the Coverage and Capacity of Ultra-Dense Networks with Directional
Transmissions | 5 pages, 4 figures, accepted by IEEE Wireless Communications Letters | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the performance of a downlink ultra-dense network (UDN) with
directional transmissions via stochastic geometry. Considering the dual-slope
path loss model and sectored beamforming pattern, we derive the expressions and
asymptotic characteristics of the coverage probability and constrained area
spectrum efficiency (ASE). Several special scenarios, namely the physically
feasible path loss model and adjustable beam pattern, are also analyzed.
Although signal-to-interference-plus-noise ratio collapsing still exists when
the path loss exponent in the near-field is no larger than 2, using strategies
like beam pattern adaption, can avoid the decrease of the coverage probability
and constrained ASE even when the base station density approaches infinity.
| [
{
"created": "Wed, 25 Dec 2019 01:59:04 GMT",
"version": "v1"
}
] | 2019-12-30 | [
[
"Xu",
"Yining",
""
],
[
"Zhou",
"Sheng",
""
]
] | We investigate the performance of a downlink ultra-dense network (UDN) with directional transmissions via stochastic geometry. Considering the dual-slope path loss model and sectored beamforming pattern, we derive the expressions and asymptotic characteristics of the coverage probability and constrained area spectrum efficiency (ASE). Several special scenarios, namely the physically feasible path loss model and adjustable beam pattern, are also analyzed. Although signal-to-interference-plus-noise ratio collapsing still exists when the path loss exponent in the near-field is no larger than 2, using strategies like beam pattern adaption, can avoid the decrease of the coverage probability and constrained ASE even when the base station density approaches infinity. |
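For reference, the dual-slope path loss model named in the abstract is usually written as below; the notation is mine, and the abstract's condition "path loss exponent in the near-field no larger than 2" corresponds to $\alpha_1 \le 2$.

```latex
\ell(r) =
\begin{cases}
  r^{-\alpha_1}, & r \le R_c,\\
  R_c^{\,\alpha_2-\alpha_1}\, r^{-\alpha_2}, & r > R_c,
\end{cases}
\qquad \alpha_1 \le \alpha_2,
```

where $R_c$ is the corner distance separating the near-field and far-field regimes, with the prefactor chosen so that $\ell$ is continuous at $r = R_c$.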
1203.1754 | Marek Cygan | Marek Cygan and Marcin Pilipczuk and Micha{\l} Pilipczuk | Known algorithms for EDGE CLIQUE COVER are probably optimal | To appear in SODA 2013 | null | null | null | cs.DS cs.CC cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the EDGE CLIQUE COVER (ECC) problem, given a graph G and an integer k, we
ask whether the edges of G can be covered with k complete subgraphs of G or,
equivalently, whether G admits an intersection model on k-element universe.
Gramm et al. [JEA 2008] have shown a set of simple rules that reduce the number
of vertices of G to 2^k, and no algorithm is known with significantly better
running time bound than a brute-force search on this reduced instance. In this
paper we show that the approach of Gramm et al. is essentially optimal: we
present a polynomial time algorithm that reduces an arbitrary 3-CNF-SAT formula
with n variables and m clauses to an equivalent ECC instance (G,k) with k =
O(log n) and |V(G)| = O(n + m). Consequently, there is no 2^{2^{o(k)}}poly(n)
time algorithm for the ECC problem, unless the Exponential Time Hypothesis
fails. To the best of our knowledge, these are the first results for a
natural, fixed-parameter tractable problem proving that a doubly-exponential
dependency on the parameter is essentially necessary.
| [
{
"created": "Thu, 8 Mar 2012 11:19:09 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Sep 2012 08:51:46 GMT",
"version": "v2"
}
] | 2012-09-27 | [
[
"Cygan",
"Marek",
""
],
[
"Pilipczuk",
"Marcin",
""
],
[
"Pilipczuk",
"Michał",
""
]
] | In the EDGE CLIQUE COVER (ECC) problem, given a graph G and an integer k, we ask whether the edges of G can be covered with k complete subgraphs of G or, equivalently, whether G admits an intersection model on k-element universe. Gramm et al. [JEA 2008] have shown a set of simple rules that reduce the number of vertices of G to 2^k, and no algorithm is known with significantly better running time bound than a brute-force search on this reduced instance. In this paper we show that the approach of Gramm et al. is essentially optimal: we present a polynomial time algorithm that reduces an arbitrary 3-CNF-SAT formula with n variables and m clauses to an equivalent ECC instance (G,k) with k = O(log n) and |V(G)| = O(n + m). Consequently, there is no 2^{2^{o(k)}}poly(n) time algorithm for the ECC problem, unless the Exponential Time Hypothesis fails. To the best of our knowledge, these are the first results for a natural, fixed-parameter tractable problem proving that a doubly-exponential dependency on the parameter is essentially necessary. |
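To make the problem definition concrete, here is a brute-force ECC decision check on tiny graphs. It uses the fact that a minimum cover can always be chosen among maximal cliques; the exponential enumeration matches the paper's point that, under ETH, no dramatically faster parameterized algorithm is expected. The examples are illustrative.

```python
import itertools
import networkx as nx

def has_edge_clique_cover(G, k):
    """Decide whether all edges of G can be covered by k cliques,
    restricting attention to maximal cliques (safe for ECC)."""
    cliques = [frozenset(c) for c in nx.find_cliques(G)]
    edges = {frozenset(e) for e in G.edges()}
    for combo in itertools.combinations(cliques, min(k, len(cliques))):
        covered = {frozenset(e) for c in combo
                   for e in itertools.combinations(sorted(c), 2)}
        if edges <= covered:
            return True
    return not edges  # an edgeless graph is covered by zero cliques

print(has_edge_clique_cover(nx.complete_graph(4), 1))  # True: K4 itself
print(has_edge_clique_cover(nx.cycle_graph(4), 3))     # False: C4 needs 4
```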
2304.07493 | Cong Guo | Cong Guo, Jiaming Tang, Weiming Hu, Jingwen Leng, Chen Zhang, Fan
Yang, Yunxin Liu, Minyi Guo, Yuhao Zhu | OliVe: Accelerating Large Language Models via Hardware-friendly
Outlier-Victim Pair Quantization | ISCA 2023 | null | 10.1145/3579371.3589038 | null | cs.AR | http://creativecommons.org/licenses/by/4.0/ | Transformer-based large language models (LLMs) have achieved great success
with the growing model size. LLMs' size grows by $240\times$ every two years,
which outpaces the hardware progress and makes model inference increasingly
costly. Model quantization is a promising approach to mitigate the widening gap
between LLM size and hardware capacity. However, the existence of outliers,
values with significant magnitudes, in LLMs makes existing quantization methods
less effective. Prior outlier-aware quantization schemes adopt sparsity
encoding techniques to separate outliers from normal values where the process
requires global coordination (e.g., a global sparsity coordination list). This
incurs complex encoding/decoding hardware logics and an extra orchestration
controller for the computation between outlier and normal values. As such, it
is not hardware-efficient and hence only achieves sub-optimal quantization
benefits.
We propose OliVe, an algorithm/architecture co-designed solution that adopts
an outlier-victim pair (OVP) quantization and handles outlier values locally
with low hardware overheads and high performance gains. The key insight of
OliVe is that outliers are important while the normal values next to them are
not. Thus those normal values (called victims) can be sacrificed to accommodate
outliers. This enables a memory-aligned OVP encoding scheme, which can be
efficiently integrated to the existing hardware accelerators like systolic
array and tensor core. As a result, OliVe-based accelerator surpasses the
existing outlier-aware accelerator, GOBO, by 4.5$\times$ speedup and
4.0$\times$ energy reduction, respectively, with a superior model accuracy.
| [
{
"created": "Sat, 15 Apr 2023 07:12:05 GMT",
"version": "v1"
}
] | 2023-04-18 | [
[
"Guo",
"Cong",
""
],
[
"Tang",
"Jiaming",
""
],
[
"Hu",
"Weiming",
""
],
[
"Leng",
"Jingwen",
""
],
[
"Zhang",
"Chen",
""
],
[
"Yang",
"Fan",
""
],
[
"Liu",
"Yunxin",
""
],
[
"Guo",
"Minyi",
""
],
[
"Zhu",
"Yuhao",
""
]
] | Transformer-based large language models (LLMs) have achieved great success with the growing model size. LLMs' size grows by $240\times$ every two years, which outpaces the hardware progress and makes model inference increasingly costly. Model quantization is a promising approach to mitigate the widening gap between LLM size and hardware capacity. However, the existence of outliers, values with significant magnitudes, in LLMs makes existing quantization methods less effective. Prior outlier-aware quantization schemes adopt sparsity encoding techniques to separate outliers from normal values where the process requires global coordination (e.g., a global sparsity coordination list). This incurs complex encoding/decoding hardware logics and an extra orchestration controller for the computation between outlier and normal values. As such, it is not hardware-efficient and hence only achieves sub-optimal quantization benefits. We propose OliVe, an algorithm/architecture co-designed solution that adopts an outlier-victim pair (OVP) quantization and handles outlier values locally with low hardware overheads and high performance gains. The key insight of OliVe is that outliers are important while the normal values next to them are not. Thus those normal values (called victims) can be sacrificed to accommodate outliers. This enables a memory-aligned OVP encoding scheme, which can be efficiently integrated to the existing hardware accelerators like systolic array and tensor core. As a result, OliVe-based accelerator surpasses the existing outlier-aware accelerator, GOBO, by 4.5$\times$ speedup and 4.0$\times$ energy reduction, respectively, with a superior model accuracy. |
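A purely conceptual NumPy sketch of the outlier-victim pair idea summarized above: values are scanned in adjacent pairs, and when one member is an outlier its neighbor (the "victim") is sacrificed so the pair's aligned memory slot can carry the outlier at higher precision. The bit-width, threshold, and keeping the outlier in float here are illustrative simplifications of mine, not OliVe's actual hardware encoding.

```python
import numpy as np

def ovp_quantize(x, n_bits=4, outlier_thresh=3.0):
    """Quantize normal values to an int grid; in each adjacent pair with
    an outlier, zero the victim and preserve the outlier instead."""
    qmax = 2 ** (n_bits - 1) - 1
    normals = x[np.abs(x) < outlier_thresh]
    scale = np.abs(normals).max() / qmax           # per-tensor scale
    out = np.round(np.clip(x / scale, -qmax, qmax)) * scale
    for i in range(0, len(x) - 1, 2):              # scan aligned pairs
        a, b = x[i], x[i + 1]
        if abs(a) >= outlier_thresh:
            out[i], out[i + 1] = a, 0.0            # keep outlier, zero victim
        elif abs(b) >= outlier_thresh:
            out[i], out[i + 1] = 0.0, b
    return out

x = np.array([0.1, -0.4, 5.2, 0.3, 0.2, 0.1])
print(ovp_quantize(x))   # 5.2 survives; its neighbor is sacrificed
```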
2405.07946 | Qiang Zou | Yaonaiming Zhao, Qiang Zou, Guoyue Luo, Jiayu Wu, Sifan Chen, Depeng
Gao, Minghao Xuan, Fuyu Wang | TPMS2STEP: error-controlled and C2 continuity-preserving translation of
TPMS models to STEP files based on constrained-PIA | null | null | null | null | cs.CG | http://creativecommons.org/publicdomain/zero/1.0/ | Triply periodic minimal surface (TPMS) is emerging as an important way of
designing microstructures. However, there has been limited use of commercial
CAD/CAM/CAE software packages for TPMS design and manufacturing. This is mainly
because TPMS is consistently described in the functional representation (F-rep)
format, while modern CAD/CAM/CAE tools are built upon the boundary
representation (B-rep) format. One possible solution to this gap is translating
TPMS to STEP, which is the standard data exchange format of CAD/CAM/CAE.
Following this direction, this paper proposes a new translation method with
error-controlling and $C^2$ continuity-preserving features. It is based on an
approximation error-driven TPMS sampling algorithm and a constrained-PIA
algorithm. The sampling algorithm controls the deviation between the original
and translated models. With it, an error bound of $2\epsilon$ on the deviation
can be ensured if two conditions called $\epsilon$-density and
$\epsilon$-approximation are satisfied. The constrained-PIA algorithm enforces
$C^2$ continuity constraints during TPMS approximation while attaining high
efficiency. A theoretical convergence proof of this algorithm is also
given. The effectiveness of the translation method has been demonstrated by a
series of examples and comparisons.
| [
{
"created": "Mon, 13 May 2024 17:22:44 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2024 02:36:26 GMT",
"version": "v2"
}
] | 2024-05-27 | [
[
"Zhao",
"Yaonaiming",
""
],
[
"Zou",
"Qiang",
""
],
[
"Luo",
"Guoyue",
""
],
[
"Wu",
"Jiayu",
""
],
[
"Chen",
"Sifan",
""
],
[
"Gao",
"Depeng",
""
],
[
"Xuan",
"Minghao",
""
],
[
"Wang",
"Fuyu",
""
]
] | Triply periodic minimal surface (TPMS) is emerging as an important way of designing microstructures. However, there has been limited use of commercial CAD/CAM/CAE software packages for TPMS design and manufacturing. This is mainly because TPMS is consistently described in the functional representation (F-rep) format, while modern CAD/CAM/CAE tools are built upon the boundary representation (B-rep) format. One possible solution to this gap is translating TPMS to STEP, which is the standard data exchange format of CAD/CAM/CAE. Following this direction, this paper proposes a new translation method with error-controlling and $C^2$ continuity-preserving features. It is based on an approximation error-driven TPMS sampling algorithm and a constrained-PIA algorithm. The sampling algorithm controls the deviation between the original and translated models. With it, an error bound of $2\epsilon$ on the deviation can be ensured if two conditions called $\epsilon$-density and $\epsilon$-approximation are satisfied. The constrained-PIA algorithm enforces $C^2$ continuity constraints during TPMS approximation while attaining high efficiency. A theoretical convergence proof of this algorithm is also given. The effectiveness of the translation method has been demonstrated by a series of examples and comparisons. |
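To make the F-rep starting point concrete, here is a short sketch that samples the gyroid, a classic TPMS given implicitly by sin x cos y + sin y cos z + sin z cos x = 0, and extracts a triangle mesh with scikit-image's marching cubes. The grid resolution is an arbitrary choice, and this stops well short of the paper's error-controlled, C2-continuous translation to STEP.

```python
import numpy as np
from skimage.measure import marching_cubes

# F-rep of the gyroid: zero level set over one period [0, 2*pi]^3.
n = 64
t = np.linspace(0, 2 * np.pi, n)
x, y, z = np.meshgrid(t, t, t, indexing="ij")
f = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

# Sample the implicit surface into a triangle mesh; converting such data
# into a spline/B-rep STEP model with bounded error is the paper's topic.
verts, faces, normals, values = marching_cubes(f, level=0.0)
print(verts.shape, faces.shape)
```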
2210.01400 | Rui Yuan | Rui Yuan, Simon S. Du, Robert M. Gower, Alessandro Lazaric, Lin Xiao | Linear Convergence of Natural Policy Gradient Methods with Log-Linear
Policies | This version adds a table of comparison for the literature review.
The paper is published as a conference paper at ICLR 2023 | null | null | null | cs.LG cs.AI math.OC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We consider infinite-horizon discounted Markov decision processes and study
the convergence rates of the natural policy gradient (NPG) and the Q-NPG
methods with the log-linear policy class. Using the compatible function
approximation framework, both methods with log-linear policies can be written
as inexact versions of the policy mirror descent (PMD) method. We show that
both methods attain linear convergence rates and
$\tilde{\mathcal{O}}(1/\epsilon^2)$ sample complexities using a simple,
non-adaptive geometrically increasing step size, without resorting to entropy
or other strongly convex regularization. Lastly, as a byproduct, we obtain
sublinear convergence rates for both methods with arbitrary constant step size.
| [
{
"created": "Tue, 4 Oct 2022 06:17:52 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Nov 2022 12:58:36 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Feb 2023 14:48:00 GMT",
"version": "v3"
}
] | 2023-02-22 | [
[
"Yuan",
"Rui",
""
],
[
"Du",
"Simon S.",
""
],
[
"Gower",
"Robert M.",
""
],
[
"Lazaric",
"Alessandro",
""
],
[
"Xiao",
"Lin",
""
]
] | We consider infinite-horizon discounted Markov decision processes and study the convergence rates of the natural policy gradient (NPG) and the Q-NPG methods with the log-linear policy class. Using the compatible function approximation framework, both methods with log-linear policies can be written as inexact versions of the policy mirror descent (PMD) method. We show that both methods attain linear convergence rates and $\tilde{\mathcal{O}}(1/\epsilon^2)$ sample complexities using a simple, non-adaptive geometrically increasing step size, without resorting to entropy or other strongly convex regularization. Lastly, as a byproduct, we obtain sublinear convergence rates for both methods with arbitrary constant step size. |
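As background for the abstract above, in the tabular/softmax case the NPG step viewed through the PMD lens takes the familiar multiplicative form below; the abstract's schedule makes the step size grow geometrically. The notation and the exact growth factor shown here are generic choices of mine, not necessarily the paper's constants.

```latex
\pi_{k+1}(a \mid s) \;\propto\; \pi_k(a \mid s)\,
  \exp\!\big(\eta_k\, Q^{\pi_k}(s,a)\big),
\qquad \eta_{k+1} = \eta_k / \gamma
\quad (\text{a geometrically increasing step size}),
```

where $\gamma \in (0,1)$ is the discount factor; with log-linear policies the update is applied through the compatible function approximation of $Q^{\pi_k}$ rather than exactly.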
2110.05305 | Pascal Koiran | Pascal Koiran and Subhayan Saha | Black Box Absolute Reconstruction for Sums of Powers of Linear Forms | null | null | null | null | cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the decomposition of multivariate polynomials as sums of powers of
linear forms. We give a randomized algorithm for the following problem: If a
homogeneous polynomial $f \in K[x_1, \ldots, x_n]$ (where $K \subseteq
\mathbb{C}$) of degree $d$ is given as a blackbox, decide whether it can be
written as a linear combination of $d$-th powers of linearly independent
complex linear forms. The main novel features of the algorithm are:
(1) For $d = 3$, we improve by a factor of $n$ on the running time from an
algorithm by Koiran and Skomra. The price to be paid for this improvement
though is that the algorithm now has two-sided error.
(2) For $d > 3$, we provide the first randomized blackbox algorithm for this
problem that runs in time polynomial in $n$ and $d$ (in an algebraic model
where only arithmetic operations and equality tests are allowed). Previous
algorithms for this problem as well as most of the existing reconstruction
algorithms for other classes appeal to a polynomial factorization subroutine.
This requires extraction of complex polynomial roots at unit cost, and in
standard models such as the unit-cost RAM or the Turing machine, this approach
does not yield polynomial time algorithms.
(3) For $d > 3$, when $f$ has rational coefficients, the running time of the
blackbox algorithm is polynomial in $n,d$ and the maximal bit size of any
coefficient of $f$. This yields the first algorithm for this problem over
$\mathbb{C}$ with polynomial running time in the bit model of computation.
| [
{
"created": "Mon, 11 Oct 2021 14:25:24 GMT",
"version": "v1"
}
] | 2021-10-12 | [
[
"Koiran",
"Pascal",
""
],
[
"Saha",
"Subhayan",
""
]
] | We study the decomposition of multivariate polynomials as sums of powers of linear forms. We give a randomized algorithm for the following problem: If a homogeneous polynomial $f \in K[x_1, \ldots, x_n]$ (where $K \subseteq \mathbb{C}$) of degree $d$ is given as a blackbox, decide whether it can be written as a linear combination of $d$-th powers of linearly independent complex linear forms. The main novel features of the algorithm are: (1) For $d = 3$, we improve by a factor of $n$ on the running time from an algorithm by Koiran and Skomra. The price to be paid for this improvement though is that the algorithm now has two-sided error. (2) For $d > 3$, we provide the first randomized blackbox algorithm for this problem that runs in time polynomial in $n$ and $d$ (in an algebraic model where only arithmetic operations and equality tests are allowed). Previous algorithms for this problem as well as most of the existing reconstruction algorithms for other classes appeal to a polynomial factorization subroutine. This requires extraction of complex polynomial roots at unit cost, and in standard models such as the unit-cost RAM or the Turing machine, this approach does not yield polynomial time algorithms. (3) For $d > 3$, when $f$ has rational coefficients, the running time of the blackbox algorithm is polynomial in $n,d$ and the maximal bit size of any coefficient of $f$. This yields the first algorithm for this problem over $\mathbb{C}$ with polynomial running time in the bit model of computation. |
2302.09813 | Juexiao Zhou | Juexiao Zhou, Haoyang Li, Xingyu Liao, Bin Zhang, Wenjia He, Zhongxiao
Li, Longxi Zhou, Xin Gao | Audit to Forget: A Unified Method to Revoke Patients' Private Data in
Intelligent Healthcare | null | null | 10.1038/s41467-023-41703-x | null | cs.LG cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Revoking personal private data is one of the basic human rights, which has
already been sheltered by several privacy-preserving laws in many countries.
However, with the development of data science, machine learning and deep
learning techniques, this right is usually neglected or violated as more and
more patients' data are being collected and used for model training, especially
in intelligent healthcare, thus making intelligent healthcare a sector where
technology must meet the law, regulations, and privacy principles to ensure
that the innovation is for the common good. In order to secure patients' right
to be forgotten, we proposed a novel solution by using auditing to guide the
forgetting process, where auditing means determining whether a dataset has been
used to train the model and forgetting requires the information of a query
dataset to be forgotten from the target model. We unified these two tasks by
introducing a new approach called knowledge purification. To implement our
solution, we developed AFS, a unified open-source software, which is able to
evaluate and revoke patients' private data from pre-trained deep learning
models. We demonstrated the generality of AFS by applying it to four tasks on
different datasets with various data sizes and architectures of deep learning
networks. The software is publicly available at
\url{https://github.com/JoshuaChou2018/AFS}.
| [
{
"created": "Mon, 20 Feb 2023 07:29:22 GMT",
"version": "v1"
}
] | 2024-01-12 | [
[
"Zhou",
"Juexiao",
""
],
[
"Li",
"Haoyang",
""
],
[
"Liao",
"Xingyu",
""
],
[
"Zhang",
"Bin",
""
],
[
"He",
"Wenjia",
""
],
[
"Li",
"Zhongxiao",
""
],
[
"Zhou",
"Longxi",
""
],
[
"Gao",
"Xin",
""
]
] | Revoking personal private data is one of the basic human rights, which has already been sheltered by several privacy-preserving laws in many countries. However, with the development of data science, machine learning and deep learning techniques, this right is usually neglected or violated as more and more patients' data are being collected and used for model training, especially in intelligent healthcare, thus making intelligent healthcare a sector where technology must meet the law, regulations, and privacy principles to ensure that the innovation is for the common good. In order to secure patients' right to be forgotten, we proposed a novel solution by using auditing to guide the forgetting process, where auditing means determining whether a dataset has been used to train the model and forgetting requires the information of a query dataset to be forgotten from the target model. We unified these two tasks by introducing a new approach called knowledge purification. To implement our solution, we developed AFS, a unified open-source software, which is able to evaluate and revoke patients' private data from pre-trained deep learning models. We demonstrated the generality of AFS by applying it to four tasks on different datasets with various data sizes and architectures of deep learning networks. The software is publicly available at \url{https://github.com/JoshuaChou2018/AFS}. |
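
A simple way to picture the auditing step is a loss-threshold membership test: data the model trained on tends to incur markedly lower loss. The sketch below uses that generic test with a dummy model; it is not the knowledge-purification method described above, and the threshold factor is an assumption.

```python
import numpy as np

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def audit(predict_proba, query_x, query_y, holdout_x, holdout_y):
    """Flag the query set as 'likely used in training' when its mean loss is
    markedly lower than on data the model provably never saw."""
    q = cross_entropy(predict_proba(query_x), query_y).mean()
    h = cross_entropy(predict_proba(holdout_x), holdout_y).mean()
    return q < 0.5 * h, q, h          # the 0.5 factor is an assumed threshold

# Dummy 3-class model that is only confident on inputs it "memorized".
def predict_proba(x):
    p = np.full((len(x), 3), 1 / 3)
    p[x[:, 0] > 0] = [0.98, 0.01, 0.01]
    return p

seen, unseen = np.ones((50, 4)), -np.ones((50, 4))
print(audit(predict_proba, seen, np.zeros(50, int), unseen, np.zeros(50, int)))
```
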
cs/0703030 | Konstantin Rybnikov | Konstantin Rybnikov | An Efficient Local Approach to Convexity Testing of Piecewise-Linear
Hypersurfaces | 3 figures | null | null | null | cs.CG | null | We show that a closed piecewise-linear hypersurface immersed in $R^n$ ($n\ge
3$) is the boundary of a convex body if and only if every point in the interior
of each $(n-3)$-face has a neighborhood that lies on the boundary of some
convex body; no assumptions about the hypersurface's topology are needed. We
derive this criterion from our generalization of Van Heijenoort's (1952)
theorem on locally convex hypersurfaces in $R^n$ to spherical spaces. We also
give an easy-to-implement convexity testing algorithm, which is based on our
criterion. For $R^3$ the number of arithmetic operations used by the algorithm
is at most linear in the number of vertices, while in general it is at most
linear in the number of incidences between the $(n-2)$-faces and $(n-3)$-faces.
When the dimension $n$ is not fixed and only ring arithmetic is allowed, the
algorithm still remains polynomial. Our method works in more general situations
than the convexity verification algorithms developed by Mehlhorn et al. (1996)
and Devillers et al. (1998) -- for example, our method does not require the
input surface to be orientable, nor does it require the input data to include
normal vectors to the facets that are oriented "in a coherent way". For $R^3$
the complexity of our algorithm is the same as that of previous algorithms; for
higher dimensions there seems to be no clear winner, but our approach is the
only one that easily handles inputs in which the facet normals are not known to
be coherently oriented or are not given at all. Furthermore, our method can be
extended to piecewise-polynomial surfaces of small degree.
| [
{
"created": "Wed, 7 Mar 2007 07:33:02 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Rybnikov",
"Konstantin",
""
]
] | We show that a closed piecewise-linear hypersurface immersed in $R^n$ ($n\ge 3$) is the boundary of a convex body if and only if every point in the interior of each $(n-3)$-face has a neighborhood that lies on the boundary of some convex body; no assumptions about the hypersurface's topology are needed. We derive this criterion from our generalization of Van Heijenoort's (1952) theorem on locally convex hypersurfaces in $R^n$ to spherical spaces. We also give an easy-to-implement convexity testing algorithm, which is based on our criterion. For $R^3$ the number of arithmetic operations used by the algorithm is at most linear in the number of vertices, while in general it is at most linear in the number of incidences between the $(n-2)$-faces and $(n-3)$-faces. When the dimension $n$ is not fixed and only ring arithmetic is allowed, the algorithm still remains polynomial. Our method works in more general situations than the convexity verification algorithms developed by Mehlhorn et al. (1996) and Devillers et al. (1998) -- for example, our method does not require the input surface to be orientable, nor does it require the input data to include normal vectors to the facets that are oriented "in a coherent way". For $R^3$ the complexity of our algorithm is the same as that of previous algorithms; for higher dimensions there seems to be no clear winner, but our approach is the only one that easily handles inputs in which the facet normals are not known to be coherently oriented or are not given at all. Furthermore, our method can be extended to piecewise-polynomial surfaces of small degree. |
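
The flavor of such a local convexity test can be shown for a single edge of a triangulated surface in $R^3$. The sketch below assumes coherently outward-wound triangles, which is precisely the input assumption the algorithm above manages to avoid, so it only illustrates the local nature of the check.

```python
import numpy as np

# For an edge shared by triangles (a, b, c) and (a, b, d), local convexity
# across the edge means the opposite vertex d does not rise above the plane
# of the first triangle (given an outward winding).
def convex_across_edge(a, b, c, d, eps=1e-9):
    n = np.cross(c - a, b - a)           # outward normal for winding (a, c, b)
    return float(np.dot(n, d - a)) <= eps

# Two adjacent faces of the unit cube, meeting convexly along a vertical edge.
a, b = np.array([0., 0., 0.]), np.array([0., 0., 1.])
c, d = np.array([1., 0., 0.]), np.array([0., 1., 0.])
print(convex_across_edge(a, b, c, d))                         # True: convex
print(convex_across_edge(a, b, c, np.array([0., -1., 0.])))   # False: reflex
```
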
2207.09159 | Ahmed El Kerim | Ahmed El Kerim and Pierre Gosselet and Frederic Magoules | Couplage Global-Local en asynchrone pour des probl\`emes lin\'eaires | in French language | null | null | null | cs.DC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An asynchronous parallel version of the non-intrusive global-local coupling
is implemented. The case of many patches, including those covering the entire
structure, is studied. The asynchronism limits the dependency on
communications, failures, and load imbalance. We detail the method and
illustrate its performance in an academic case.
| [
{
"created": "Tue, 19 Jul 2022 09:59:12 GMT",
"version": "v1"
}
] | 2022-07-20 | [
[
"Kerim",
"Ahmed El",
""
],
[
"Gosselet",
"Pierre",
""
],
[
"Magoules",
"Frederic",
""
]
] | An asynchronous parallel version of the non-intrusive global-local coupling is implemented. The case of many patches, including those covering the entire structure, is studied. The asynchronism limits the dependency on communications, failures, and load imbalance. We detail the method and illustrate its performance in an academic case. |
1811.08225 | Danilo Vasconcellos Vargas | Danilo Vasconcellos Vargas and Hirotaka Takano and Junichi Murata | Self Organizing Classifiers: First Steps in Structured Evolutionary
Machine Learning | null | Evolutionary Intelligence 6 (2), 57-72 (2013) | null | null | cs.NE cs.AI cs.LG cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning classifier systems (LCSs) are evolutionary machine learning
algorithms, flexible enough to be applied to reinforcement, supervised and
unsupervised learning problems with good performance. Recently, self organizing
classifiers were proposed which are similar to LCSs but have the advantage that
in their structured population no balance between niching and fitness pressure is
necessary. However, more tests and analysis are required to verify their
benefits. Here, a variation of the first algorithm is proposed which uses a
parameterless self organizing map (SOM). This algorithm is applied in
challenging problems such as big, noisy as well as dynamically changing
continuous input-action mazes (growing and compressing mazes are included) with
good performance. Moreover, a genetic operator is proposed which utilizes the
topological information of the SOM's population structure, improving the
results. Thus, the first steps in structured evolutionary machine learning are
shown; nonetheless, the problems faced are more difficult than the state-of-the-art
continuous input-action multi-step ones.
| [
{
"created": "Tue, 20 Nov 2018 13:00:51 GMT",
"version": "v1"
}
] | 2018-11-21 | [
[
"Vargas",
"Danilo Vasconcellos",
""
],
[
"Takano",
"Hirotaka",
""
],
[
"Murata",
"Junichi",
""
]
] | Learning classifier systems (LCSs) are evolutionary machine learning algorithms, flexible enough to be applied to reinforcement, supervised and unsupervised learning problems with good performance. Recently, self organizing classifiers were proposed which are similar to LCSs but have the advantage that in their structured population no balance between niching and fitness pressure is necessary. However, more tests and analysis are required to verify their benefits. Here, a variation of the first algorithm is proposed which uses a parameterless self organizing map (SOM). This algorithm is applied in challenging problems such as big, noisy as well as dynamically changing continuous input-action mazes (growing and compressing mazes are included) with good performance. Moreover, a genetic operator is proposed which utilizes the topological information of the SOM's population structure, improving the results. Thus, the first steps in structured evolutionary machine learning are shown; nonetheless, the problems faced are more difficult than the state-of-the-art continuous input-action multi-step ones. |
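
For reference, the generic self-organizing map update that such classifiers build on looks as follows; this is the standard SOM rule with hand-picked parameters, not the parameterless variant described above.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = 8                                  # 8x8 map (an illustrative size)
W = rng.random((grid, grid, 2))           # one weight vector per map node
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), -1)

def som_step(x, lr=0.3, sigma=1.5):
    d = ((W - x) ** 2).sum(-1)
    bmu = np.unravel_index(d.argmin(), d.shape)   # best matching unit
    # A Gaussian neighborhood on the grid pulls the BMU's neighbors toward x.
    h = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
    W[...] = W + lr * h[..., None] * (x - W)

for _ in range(2000):
    som_step(rng.random(2))               # fit the unit square
```
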
1210.6719 | Jun Muramatsu | Jun Muramatsu and Shigeki Miyake | Construction of Multiple Access Channel Codes Based on Hash Property | This paper has been presented in part at Proc. 2011 IEEE International
Symposium on Information Theory and submitted to IEEE Transactions on
Information Theory. 39 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to introduce the construction of codes for a general
discrete stationary memoryless multiple access channel based on the notion
of the hash property. Since an ensemble of sparse matrices has a hash property,
we can use sparse matrices for code construction. Our approach has a potential
advantage compared to the conventional random coding because it is expected
that we can use some approximation algorithms by using the sparse structure of
codes.
| [
{
"created": "Thu, 25 Oct 2012 01:34:37 GMT",
"version": "v1"
}
] | 2012-10-26 | [
[
"Muramatsu",
"Jun",
""
],
[
"Miyake",
"Shigeki",
""
]
] | The aim of this paper is to introduce the construction of codes for a general discrete stationary memoryless multiple access channel based on the notion of the hash property. Since an ensemble of sparse matrices has a hash property, we can use sparse matrices for code construction. Our approach has a potential advantage compared to the conventional random coding because it is expected that we can use some approximation algorithms by using the sparse structure of codes. |
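
A tiny illustration of using a sparse random binary matrix as a linear hash over GF(2) appears below; sizes and row weight are illustrative assumptions, and the (typically message-passing) decoder that a real construction would pair with this map is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, row_weight = 24, 8, 3               # source length, hash length, sparsity

A = np.zeros((m, n), dtype=np.uint8)      # sparse binary matrix, fixed row weight
for i in range(m):
    A[i, rng.choice(n, size=row_weight, replace=False)] = 1

x = rng.integers(0, 2, size=n, dtype=np.uint8)
syndrome = (A @ x) % 2                    # the "hash" of the source sequence x
print(syndrome)
```
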
2201.07048 | Tianyu Fang | Tianyu Fang, Yijie Mao, Shanpu Shen, Zhencai Zhu, Bruno Clerckx | Fully Connected Reconfigurable Intelligent Surface Aided Rate-Splitting
Multiple Access for Multi-User Multi-Antenna Transmission | 6 pages, 5 figures, conference | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Rate-splitting multiple access (RSMA) has been recognized as a promising and
powerful multiple access (MA) scheme, non-orthogonal transmission framework and
interference management strategy for 6G. Inspired by the appealing spectral
efficiency gain achieved by RSMA over conventional MA schemes in multi-user
multi-antenna transmission, in this paper we introduce RSMA to reconfigurable
intelligent surface (RIS)-aided multiple-input single-output (MISO) broadcast
channel (BC). To further enhance the spectral efficiency, a more generalized
RIS architecture called fully connected RIS is considered. By jointly
optimizing the scattering matrix of the fully connected RIS and the transmit
beamformers to maximize the sum-rate, we show that the proposed fully connected
RIS aided RSMA transmission scheme significantly improves the spectral
efficiency compared with the conventional single connected RIS schemes and the
schemes without RIS. It acts as a new benchmark for linearly precoded
multi-user multi-antenna networks.
| [
{
"created": "Tue, 18 Jan 2022 15:20:02 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Mar 2022 05:19:47 GMT",
"version": "v2"
}
] | 2022-03-11 | [
[
"Fang",
"Tianyu",
""
],
[
"Mao",
"Yijie",
""
],
[
"Shen",
"Shanpu",
""
],
[
"Zhu",
"Zhencai",
""
],
[
"Clerckx",
"Bruno",
""
]
] | Rate-splitting multiple access (RSMA) has been recognized as a promising and powerful multiple access (MA) scheme, non-orthogonal transmission framework and interference management strategy for 6G. Inspired by the appealing spectral efficiency gain achieved by RSMA over conventional MA schemes in multi-user multi-antenna transmission, in this paper we introduce RSMA to reconfigurable intelligent surface (RIS)-aided multiple-input single-output (MISO) broadcast channel (BC). To further enhance the spectral efficiency, a more generalized RIS architecture called fully connected RIS is considered. By jointly optimizing the scattering matrix of the fully connected RIS and the transmit beamformers to maximize the sum-rate, we show that the proposed fully connected RIS aided RSMA transmission scheme significantly improves the spectral efficiency compared with the conventional single connected RIS schemes and the schemes without RIS. It acts as a new benchmark for linearly precoded multi-user multi-antenna networks. |
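
The effective-channel computation behind such designs is compact. The sketch below contrasts a full (here random unitary) scattering matrix with a diagonal phase-shift one for a single user's rate; channel statistics and sizes are illustrative assumptions, and the RSMA common/private rate-splitting layer and the joint optimization are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 4, 16                               # transmit antennas, RIS elements
H_br = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)
h_ru = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
h_d = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
theta_full = Q                                          # unitary scattering matrix
theta_diag = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, N)))

def rate(theta, w, sigma2=1.0):
    # Effective channel: direct path plus the BS -> RIS -> user cascade.
    h_eff = h_d + H_br.conj().T @ theta.conj().T @ h_ru
    return np.log2(1 + abs(h_eff.conj() @ w) ** 2 / sigma2)

w = np.ones(M) / np.sqrt(M)                # fixed unit-power beamformer
print(rate(theta_full, w), rate(theta_diag, w))
```
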
1510.02181 | Alejandro Erickson | Alejandro Erickson and Iain A. Stewart and Javier Navaridas and
Abbas E. Kiasari | The Stellar Transformation: From Interconnection Networks to Datacenter
Networks | Submitted to a journal | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The first dual-port server-centric datacenter network, FiConn, was introduced
in 2009 and there are several others now in existence; however, the pool of
topologies to choose from remains small. We propose a new generic construction,
the stellar transformation, that dramatically increases the size of this pool
by facilitating the transformation of well-studied topologies from
interconnection networks, along with their networking properties and routing
algorithms, into viable dual-port server-centric datacenter network topologies.
We demonstrate that under our transformation, numerous interconnection networks
yield datacenter network topologies with potentially good, and easily
computable, baseline properties. We instantiate our construction so as to apply
it to generalized hypercubes and obtain the datacenter networks GQ*. Our
construction automatically yields routing algorithms for GQ* and we empirically
compare GQ* (and its routing algorithms) with the established datacenter
networks FiConn and DPillar (and their routing algorithms); this comparison is
with respect to network throughput, latency, load balancing, fault-tolerance,
and cost to build, and is with regard to all-to-all, many all-to-all,
butterfly, and random traffic patterns. We find that GQ* outperforms both
FiConn and DPillar (sometimes significantly so) and that there is substantial
scope for our stellar transformation to yield new dual-port server-centric
datacenter networks that are a considerable improvement on existing ones.
| [
{
"created": "Thu, 8 Oct 2015 02:09:29 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Nov 2015 01:16:36 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Feb 2016 01:01:34 GMT",
"version": "v3"
},
{
"created": "Mon, 27 Jun 2016 21:41:20 GMT",
"version": "v4"
}
] | 2016-06-29 | [
[
"Erickson",
"Alejandro",
""
],
[
"Stewart",
"and Iain A.",
""
],
[
"Navaridas",
"Javier",
""
],
[
"Kiasari",
"Abbas E.",
""
]
] | The first dual-port server-centric datacenter network, FiConn, was introduced in 2009 and there are several others now in existence; however, the pool of topologies to choose from remains small. We propose a new generic construction, the stellar transformation, that dramatically increases the size of this pool by facilitating the transformation of well-studied topologies from interconnection networks, along with their networking properties and routing algorithms, into viable dual-port server-centric datacenter network topologies. We demonstrate that under our transformation, numerous interconnection networks yield datacenter network topologies with potentially good, and easily computable, baseline properties. We instantiate our construction so as to apply it to generalized hypercubes and obtain the datacenter networks GQ*. Our construction automatically yields routing algorithms for GQ* and we empirically compare GQ* (and its routing algorithms) with the established datacenter networks FiConn and DPillar (and their routing algorithms); this comparison is with respect to network throughput, latency, load balancing, fault-tolerance, and cost to build, and is with regard to all-to-all, many all-to-all, butterfly, and random traffic patterns. We find that GQ* outperforms both FiConn and DPillar (sometimes significantly so) and that there is substantial scope for our stellar transformation to yield new dual-port server-centric datacenter networks that are a considerable improvement on existing ones. |
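
As a toy picture of how a base interconnection network might be rewired into a dual-port server-centric topology, the sketch below turns each base vertex into a switch and each base edge into a pair of servers joined by a server-to-server cable, so every server uses one switch port and one server port. This is our illustrative reading of such constructions, not necessarily the paper's exact definition of the stellar transformation.

```python
import itertools

def transform(base_edges):
    links = []
    for u, v in base_edges:
        su, sv = f"server_{u}_{v}", f"server_{v}_{u}"   # one server per endpoint
        links += [(su, f"switch_{u}"), (sv, f"switch_{v}"), (su, sv)]
    return links

# Base topology: the 3-dimensional hypercube Q3 (vertices 0..7).
q3 = [(a, b) for a, b in itertools.combinations(range(8), 2)
      if bin(a ^ b).count("1") == 1]
links = transform(q3)
print(len(q3), "base edges ->", len(links), "links,", 2 * len(q3), "servers")
```
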
2303.07710 | Th\'eo Pierron | Nicolas Bousquet, Valentin Gledel, Jonathan Narboni, Th\'eo Pierron | A note on the flip distance between non-crossing spanning trees | null | null | null | null | cs.CG cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider spanning trees of $n$ points in convex position whose edges are
pairwise non-crossing. Applying a flip to such a tree consists in adding an
edge and removing another so that the result is still a non-crossing spanning
tree. Given two trees, we investigate the minimum number of flips required to
transform one into the other. The naive $2n-\Omega(1)$ upper bound stood for 25
years until a recent breakthrough from Aichholzer et al. yielding a
$2n-\Omega(\log n)$ bound. We improve their result with a $2n-\Omega(\sqrt{n})$
upper bound, and we strengthen and shorten the proofs of several of their
results.
| [
{
"created": "Tue, 14 Mar 2023 08:52:36 GMT",
"version": "v1"
}
] | 2023-03-15 | [
[
"Bousquet",
"Nicolas",
""
],
[
"Gledel",
"Valentin",
""
],
[
"Narboni",
"Jonathan",
""
],
[
"Pierron",
"Théo",
""
]
] | We consider spanning trees of $n$ points in convex position whose edges are pairwise non-crossing. Applying a flip to such a tree consists in adding an edge and removing another so that the result is still a non-crossing spanning tree. Given two trees, we investigate the minimum number of flips required to transform one into the other. The naive $2n-\Omega(1)$ upper bound stood for 25 years until a recent breakthrough from Aichholzer et al. yielding a $2n-\Omega(\log n)$ bound. We improve their result with a $2n-\Omega(\sqrt{n})$ upper bound, and we strengthen and shorten the proofs of several of their results. |
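
The flip move itself is easy to make concrete: with points labelled around a circle, two chords cross iff their endpoints strictly interleave. The sketch below validates a single flip on a small example; the DFS connectivity check and the star example are illustrative.

```python
from itertools import combinations

def crosses(e, f):
    (a, b), (c, d) = sorted(e), sorted(f)
    return (a < c < b < d) or (c < a < d < b)

def is_valid_flip(tree, old_edge, new_edge, n):
    """A flip removes old_edge and adds new_edge; the result must again be a
    spanning tree (connected, n-1 edges) with pairwise non-crossing edges."""
    edges = (set(tree) - {old_edge}) | {new_edge}
    if any(crosses(e, f) for e, f in combinations(edges, 2)):
        return False
    adj = {v: [] for v in range(n)}       # connectivity check by DFS
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

star = [(0, i) for i in range(1, 5)]      # star at vertex 0, n = 5
print(is_valid_flip(star, (0, 2), (1, 2), 5))   # True: still non-crossing
```
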
1704.05091 | Pedro Saleiro | Pedro Saleiro, Eduarda Mendes Rodrigues, Carlos Soares, Eug\'enio
Oliveira | FEUP at SemEval-2017 Task 5: Predicting Sentiment Polarity and Intensity
with Financial Word Embeddings | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the approach developed at the Faculty of Engineering of
University of Porto, to participate in SemEval 2017, Task 5: Fine-grained
Sentiment Analysis on Financial Microblogs and News. The task consisted in
predicting a real continuous variable from -1.0 to +1.0 representing the
polarity and intensity of sentiment concerning companies/stocks mentioned in
short texts. We modeled the task as a regression analysis problem and combined
traditional techniques such as pre-processing short texts, bag-of-words
representations and lexical-based features with enhanced financial specific
bag-of-embeddings. We used an external collection of tweets and news headlines
mentioning companies/stocks from S\&P 500 to create financial word embeddings
which are able to capture domain-specific syntactic and semantic similarities.
The resulting approach obtained a cosine similarity score of 0.69 in sub-task
5.1 - Microblogs and 0.68 in sub-task 5.2 - News Headlines.
| [
{
"created": "Mon, 17 Apr 2017 18:48:00 GMT",
"version": "v1"
}
] | 2017-04-19 | [
[
"Saleiro",
"Pedro",
""
],
[
"Rodrigues",
"Eduarda Mendes",
""
],
[
"Soares",
"Carlos",
""
],
[
"Oliveira",
"Eugénio",
""
]
] | This paper presents the approach developed at the Faculty of Engineering of University of Porto, to participate in SemEval 2017, Task 5: Fine-grained Sentiment Analysis on Financial Microblogs and News. The task consisted in predicting a real continuous variable from -1.0 to +1.0 representing the polarity and intensity of sentiment concerning companies/stocks mentioned in short texts. We modeled the task as a regression analysis problem and combined traditional techniques such as pre-processing short texts, bag-of-words representations and lexical-based features with enhanced finance-specific bag-of-embeddings. We used an external collection of tweets and news headlines mentioning companies/stocks from S\&P 500 to create financial word embeddings which are able to capture domain-specific syntactic and semantic similarities. The resulting approach obtained a cosine similarity score of 0.69 in sub-task 5.1 - Microblogs and 0.68 in sub-task 5.2 - News Headlines. |
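
The regression setup can be miniaturized as follows: average word embeddings as features, ridge regression as the predictor. The tiny embedding table and closed-form solver below are illustrative assumptions, not the system's actual features or training corpus.

```python
import numpy as np

emb = {"stock": np.array([0.9, 0.1]), "soars": np.array([0.8, 0.9]),
       "plunges": np.array([-0.7, -0.9]), "the": np.array([0.0, 0.0])}

def featurize(text):
    """Bag-of-embeddings: mean of the word vectors present in the text."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

X = np.array([featurize(t) for t in ["the stock soars", "the stock plunges"]])
y = np.array([0.8, -0.7])                  # gold scores in [-1, 1]
lam = 0.1                                  # ridge regression, closed form
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
print(np.clip(featurize("stock soars") @ w, -1.0, 1.0))
```
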
2402.19076 | Moritz Blum | Gennaro Nolano, Moritz Blum, Basil Ell, Philipp Cimiano | Pointing out the Shortcomings of Relation Extraction Models with
Semantically Motivated Adversarials | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In recent years, large language models have achieved state-of-the-art
performance across various NLP tasks. However, investigations have shown that
these models tend to rely on shortcut features, leading to inaccurate
predictions and causing the models to be unreliable when generalizing to
out-of-distribution (OOD) samples. For instance, in the context of relation
extraction (RE), we would expect a model to identify the same relation
independently of the entities involved in it. For example, consider the
sentence "Leonardo da Vinci painted the Mona Lisa" expressing the
created(Leonardo_da_Vinci, Mona_Lisa) relation. If we substitute "Leonardo da
Vinci" with "Barack Obama", then the sentence still expresses the created
relation. A robust model is supposed to detect the same relation in both cases.
In this work, we describe several semantically-motivated strategies to generate
adversarial examples by replacing entity mentions and investigate how
state-of-the-art RE models perform under pressure. Our analyses show that the
performance of these models significantly deteriorates on the modified datasets
(avg. of -48.5% in F1), which indicates that these models rely to a great
extent on shortcuts, such as surface forms (or patterns therein) of entities,
without making full use of the information present in the sentences.
| [
{
"created": "Thu, 29 Feb 2024 12:01:46 GMT",
"version": "v1"
}
] | 2024-03-01 | [
[
"Nolano",
"Gennaro",
""
],
[
"Blum",
"Moritz",
""
],
[
"Ell",
"Basil",
""
],
[
"Cimiano",
"Philipp",
""
]
] | In recent years, large language models have achieved state-of-the-art performance across various NLP tasks. However, investigations have shown that these models tend to rely on shortcut features, leading to inaccurate predictions and causing the models to be unreliable when generalizing to out-of-distribution (OOD) samples. For instance, in the context of relation extraction (RE), we would expect a model to identify the same relation independently of the entities involved in it. For example, consider the sentence "Leonardo da Vinci painted the Mona Lisa" expressing the created(Leonardo_da_Vinci, Mona_Lisa) relation. If we substitute "Leonardo da Vinci" with "Barack Obama", then the sentence still expresses the created relation. A robust model is supposed to detect the same relation in both cases. In this work, we describe several semantically-motivated strategies to generate adversarial examples by replacing entity mentions and investigate how state-of-the-art RE models perform under pressure. Our analyses show that the performance of these models significantly deteriorates on the modified datasets (avg. of -48.5% in F1), which indicates that these models rely to a great extent on shortcuts, such as surface forms (or patterns therein) of entities, without making full use of the information present in the sentences. |
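
One of the substitution strategies is easy to sketch: swap an entity mention for another entity of the same type while keeping the gold relation label fixed. The type inventory and sentence template below are illustrative assumptions.

```python
import random

ENTITIES = {"PERSON": ["Leonardo da Vinci", "Barack Obama", "Marie Curie"],
            "WORK":   ["Mona Lisa", "Guernica"]}

def substitute(sentence, mention, etype, rng=random.Random(0)):
    """Replace one entity mention with a different entity of the same type."""
    candidates = [e for e in ENTITIES[etype] if e != mention]
    return sentence.replace(mention, rng.choice(candidates))

s = "Leonardo da Vinci painted the Mona Lisa"
adv = substitute(s, "Leonardo da Vinci", "PERSON")
print(adv)   # the created(X, Mona_Lisa) relation should still be detected
```
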
2307.08074 | Longyue Wang | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun
Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language
Modelling | Zhaopeng Tu is the corresponding author | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences -- is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of intra-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain, and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench.
| [
{
"created": "Sun, 16 Jul 2023 15:18:25 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Jul 2023 00:11:24 GMT",
"version": "v2"
}
] | 2023-07-25 | [
[
"Wang",
"Longyue",
""
],
[
"Du",
"Zefeng",
""
],
[
"Liu",
"Donghuai",
""
],
[
"Cai",
"Deng",
""
],
[
"Yu",
"Dian",
""
],
[
"Jiang",
"Haiyun",
""
],
[
"Wang",
"Yan",
""
],
[
"Cui",
"Leyang",
""
],
[
"Shi",
"Shuming",
""
],
[
"Tu",
"Zhaopeng",
""
]
] | Modeling discourse -- the linguistic phenomena that go beyond individual sentences -- is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of intra-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate inter-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We evaluate a total of 20 general-, in-domain, and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench. |
2212.08241 | Inayat Ali | Sonia Sabir, Inayat Ali, Eraj Khan | H-LPS: a hybrid approach for user's location privacy in location-based
services | null | null | null | null | cs.CR | http://creativecommons.org/publicdomain/zero/1.0/ | Applications providing location-based services (LBS) have gained much
attention and importance with the notion of the internet of things (IoT). Users
are utilizing LBS by providing their location information to third-party
service providers. However, location data is very sensitive and can reveal a
user's private life to adversaries. The passive and pervasive data collection
in IoT raises serious location privacy issues. Privacy-preserving
location-based services are a hot research topic. Many anonymization and
obfuscation techniques have been proposed to overcome location privacy issues.
In this paper, we have proposed a hybrid location privacy scheme (H-LPS), a
hybrid scheme mainly based on obfuscation and collaboration for protecting
users' location privacy while using location-based services. Obfuscation
naturally degrades the quality of service but provides more privacy as compared
to anonymization. Our proposed scheme, H-LPS, provides a very high-level of
privacy yet provides good accuracy for most of the users. The privacy level and
service accuracy of H-LPS are compared with state-of-the-art location privacy
schemes and it is shown that H-LPS could be a candidate solution for preserving
user location privacy in location-based services.
| [
{
"created": "Fri, 16 Dec 2022 02:16:29 GMT",
"version": "v1"
}
] | 2022-12-19 | [
[
"Sabir",
"Sonia",
""
],
[
"Ali",
"Inayat",
""
],
[
"Khan",
"Eraj",
""
]
] | Applications providing location-based services (LBS) have gained much attention and importance with the notion of the internet of things (IoT). Users are utilizing LBS by providing their location information to third-party service providers. However, location data is very sensitive and can reveal a user's private life to adversaries. The passive and pervasive data collection in IoT raises serious location privacy issues. Privacy-preserving location-based services are a hot research topic. Many anonymization and obfuscation techniques have been proposed to overcome location privacy issues. In this paper, we have proposed a hybrid location privacy scheme (H-LPS), a hybrid scheme mainly based on obfuscation and collaboration for protecting users' location privacy while using location-based services. Obfuscation naturally degrades the quality of service but provides more privacy as compared to anonymization. Our proposed scheme, H-LPS, provides a very high level of privacy yet provides good accuracy for most of the users. The privacy level and service accuracy of H-LPS are compared with state-of-the-art location privacy schemes and it is shown that H-LPS could be a candidate solution for preserving user location privacy in location-based services. |
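
Obfuscation primitives of the kind combined in such schemes can be sketched with planar Laplace noise (geo-indistinguishability style). This is a standard primitive, not the specific H-LPS mechanism; epsilon and the rough metres-to-degrees conversion below are assumptions.

```python
import math, random

def obfuscate(lat, lon, epsilon=0.01, rng=random.Random(0)):
    """Report a location drawn from a planar Laplace centred on the truth.
    The radius pdf is proportional to r*exp(-eps*r), i.e. Gamma(2, 1/eps)."""
    theta = rng.uniform(0, 2 * math.pi)
    r = rng.gammavariate(2, 1 / epsilon)          # radius in metres
    # ~111 km per degree latitude; longitude scaling is only approximate.
    return (lat + r * math.cos(theta) / 111_000,
            lon + r * math.sin(theta) / 111_000)

print(obfuscate(48.8566, 2.3522))                 # a noisy reported location
```
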
2311.09217 | Yinghao Xu | Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan
Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, Kai Zhang | DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction
Model | Project Page: https://justimyhxu.github.io/projects/dmv3d/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose \textbf{DMV3D}, a novel 3D generation approach that uses a
transformer-based 3D large reconstruction model to denoise multi-view
diffusion. Our reconstruction model incorporates a triplane NeRF representation
and can denoise noisy multi-view images via NeRF reconstruction and rendering,
achieving single-stage 3D generation in $\sim$30s on a single A100 GPU. We train
\textbf{DMV3D} on large-scale multi-view image datasets of highly diverse
objects using only image reconstruction losses, without accessing 3D assets. We
demonstrate state-of-the-art results for the single-image reconstruction
problem where probabilistic modeling of unseen object parts is required for
generating diverse reconstructions with sharp textures. We also show
high-quality text-to-3D generation results outperforming previous 3D diffusion
models. Our project website is at: https://justimyhxu.github.io/projects/dmv3d/ .
| [
{
"created": "Wed, 15 Nov 2023 18:58:41 GMT",
"version": "v1"
}
] | 2023-11-16 | [
[
"Xu",
"Yinghao",
""
],
[
"Tan",
"Hao",
""
],
[
"Luan",
"Fujun",
""
],
[
"Bi",
"Sai",
""
],
[
"Wang",
"Peng",
""
],
[
"Li",
"Jiahao",
""
],
[
"Shi",
"Zifan",
""
],
[
"Sunkavalli",
"Kalyan",
""
],
[
"Wetzstein",
"Gordon",
""
],
[
"Xu",
"Zexiang",
""
],
[
"Zhang",
"Kai",
""
]
] | We propose \textbf{DMV3D}, a novel 3D generation approach that uses a transformer-based 3D large reconstruction model to denoise multi-view diffusion. Our reconstruction model incorporates a triplane NeRF representation and can denoise noisy multi-view images via NeRF reconstruction and rendering, achieving single-stage 3D generation in $\sim$30s on a single A100 GPU. We train \textbf{DMV3D} on large-scale multi-view image datasets of highly diverse objects using only image reconstruction losses, without accessing 3D assets. We demonstrate state-of-the-art results for the single-image reconstruction problem where probabilistic modeling of unseen object parts is required for generating diverse reconstructions with sharp textures. We also show high-quality text-to-3D generation results outperforming previous 3D diffusion models. Our project website is at: https://justimyhxu.github.io/projects/dmv3d/ . |
2402.11217 | Wenting Chen | Wenxuan Wang, Yihang Su, Jingyuan Huan, Jie Liu, Wenting Chen, Yudi
Zhang, Cheng-Yi Li, Kao-Jung Chang, Xiaohan Xin, Linlin Shen, Michael R. Lyu | Asclepius: A Spectrum Evaluation Benchmark for Medical Multi-Modal Large
Language Models | 20 pages, 15 figures | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The significant breakthroughs of Medical Multi-Modal Large Language Models
(Med-MLLMs) renovate modern healthcare with robust information synthesis and
medical decision support. However, these models are often evaluated on
benchmarks that are unsuitable for the Med-MLLMs due to the intricate nature of
the real-world diagnostic frameworks, which encompass diverse medical
specialties and involve complex clinical decisions. Moreover, these benchmarks
are susceptible to data leakage, since Med-MLLMs are trained on large
assemblies of publicly available data. Thus, an isolated and clinically
representative benchmark is highly desirable for credible Med-MLLMs evaluation.
To this end, we introduce Asclepius, a novel Med-MLLM benchmark that rigorously
and comprehensively assesses model capability in terms of: distinct medical
specialties (cardiovascular, gastroenterology, etc.) and different diagnostic
capacities (perception, disease analysis, etc.). Grounded in 3 proposed core
principles, Asclepius ensures a comprehensive evaluation by encompassing 15
medical specialties, stratifying into 3 main categories and 8 sub-categories of
clinical tasks, and avoiding train-validate contamination. We further
provide an in-depth analysis of 6 Med-MLLMs and compare them with 5 human
specialists, providing insights into their competencies and limitations in
various medical contexts. Our work not only advances the understanding of
Med-MLLMs' capabilities but also sets a precedent for future evaluations and
the safe deployment of these models in clinical environments. We launch and
maintain a leaderboard for community assessment of Med-MLLM capabilities
(https://asclepius-med.github.io/).
| [
{
"created": "Sat, 17 Feb 2024 08:04:23 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Wang",
"Wenxuan",
""
],
[
"Su",
"Yihang",
""
],
[
"Huan",
"Jingyuan",
""
],
[
"Liu",
"Jie",
""
],
[
"Chen",
"Wenting",
""
],
[
"Zhang",
"Yudi",
""
],
[
"Li",
"Cheng-Yi",
""
],
[
"Chang",
"Kao-Jung",
""
],
[
"Xin",
"Xiaohan",
""
],
[
"Shen",
"Linlin",
""
],
[
"Lyu",
"Michael R.",
""
]
] | The significant breakthroughs of Medical Multi-Modal Large Language Models (Med-MLLMs) renovate modern healthcare with robust information synthesis and medical decision support. However, these models are often evaluated on benchmarks that are unsuitable for the Med-MLLMs due to the intricate nature of the real-world diagnostic frameworks, which encompass diverse medical specialties and involve complex clinical decisions. Moreover, these benchmarks are susceptible to data leakage, since Med-MLLMs are trained on large assemblies of publicly available data. Thus, an isolated and clinically representative benchmark is highly desirable for credible Med-MLLMs evaluation. To this end, we introduce Asclepius, a novel Med-MLLM benchmark that rigorously and comprehensively assesses model capability in terms of: distinct medical specialties (cardiovascular, gastroenterology, etc.) and different diagnostic capacities (perception, disease analysis, etc.). Grounded in 3 proposed core principles, Asclepius ensures a comprehensive evaluation by encompassing 15 medical specialties, stratifying into 3 main categories and 8 sub-categories of clinical tasks, and avoiding train-validate contamination. We further provide an in-depth analysis of 6 Med-MLLMs and compare them with 5 human specialists, providing insights into their competencies and limitations in various medical contexts. Our work not only advances the understanding of Med-MLLMs' capabilities but also sets a precedent for future evaluations and the safe deployment of these models in clinical environments. We launch and maintain a leaderboard for community assessment of Med-MLLM capabilities (https://asclepius-med.github.io/). |
2109.06710 | Junlin Zhao | Mehmet Emre Ozfatura, Junlin Zhao, and Deniz G\"und\"uz | Fast Federated Edge Learning with Overlapped Communication and
Computation and Channel-Aware Fair Client Scheduling | Accepted in IEEE SPAWC 2021 | null | null | null | cs.IT cs.LG math.IT | http://creativecommons.org/licenses/by/4.0/ | We consider federated edge learning (FEEL) over wireless fading channels
taking into account the downlink and uplink channel latencies, and the random
computation delays at the clients. We speed up the training process by
overlapping the communication with computation. With fountain coded
transmission of the global model update, clients receive the global model
asynchronously, and start performing local computations right away. Then, we
propose a dynamic client scheduling policy, called MRTP, for uploading local
model updates to the parameter server (PS), which, at any time, schedules the
client with the minimum remaining upload time. However, MRTP can lead to biased
participation of clients in the update process, resulting in performance
degradation in non-iid data scenarios. To overcome this, we propose two
alternative schemes with fairness considerations, termed as age-aware MRTP
(A-MRTP), and opportunistically fair MRTP (OF-MRTP). In A-MRTP, the remaining
clients are scheduled according to the ratio between their remaining
transmission time and the update age, while in OF-MRTP, the selection mechanism
utilizes the long term average channel rate of the clients to further reduce
the latency while ensuring fair participation of the clients. It is shown
through numerical simulations that OF-MRTP provides significant reduction in
latency without sacrificing test accuracy.
| [
{
"created": "Tue, 14 Sep 2021 14:16:01 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Ozfatura",
"Mehmet Emre",
""
],
[
"Zhao",
"Junlin",
""
],
[
"Gündüz",
"Deniz",
""
]
] | We consider federated edge learning (FEEL) over wireless fading channels taking into account the downlink and uplink channel latencies, and the random computation delays at the clients. We speed up the training process by overlapping the communication with computation. With fountain coded transmission of the global model update, clients receive the global model asynchronously, and start performing local computations right away. Then, we propose a dynamic client scheduling policy, called MRTP, for uploading local model updates to the parameter server (PS), which, at any time, schedules the client with the minimum remaining upload time. However, MRTP can lead to biased participation of clients in the update process, resulting in performance degradation in non-iid data scenarios. To overcome this, we propose two alternative schemes with fairness considerations, termed as age-aware MRTP (A-MRTP), and opportunistically fair MRTP (OF-MRTP). In A-MRTP, the remaining clients are scheduled according to the ratio between their remaining transmission time and the update age, while in OF-MRTP, the selection mechanism utilizes the long term average channel rate of the clients to further reduce the latency while ensuring fair participation of the clients. It is shown through numerical simulations that OF-MRTP provides significant reduction in latency without sacrificing test accuracy. |
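
The MRTP rule reduces to an argmin over remaining upload times, as below; the client count and remaining times are illustrative assumptions, with the A-MRTP and OF-MRTP variants noted in comments.

```python
import random

rng = random.Random(0)
remaining = {c: rng.uniform(1.0, 10.0) for c in range(8)}   # seconds left

order = []
while remaining:
    c = min(remaining, key=remaining.get)   # MRTP: minimum remaining time
    # A-MRTP would rank by remaining_time / update_age; OF-MRTP would also
    # factor in each client's long-term average channel rate for fairness.
    order.append(c)
    del remaining[c]
print("upload order:", order)
```
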
2407.00265 | Shayan Khorassany | Shayan Khorassany, Eric B. Dew, Mohammad Rahim Sobhani, Roger J. Zemp | Radiation Impedance of Rectangular CMUTs | 18 pages, 10 figures, submitted to Sensors | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Recently, capacitive micromachined ultrasound transducers (CMUTs) with long
rectangular membranes have demonstrated performance advantages over
conventional piezoelectric transducers; however, modeling these CMUT geometries
has been limited to computationally burdensome numerical methods. Improved fast
modeling methods such as equivalent circuit models could help achieve designs
with even better performance. The primary obstacle in developing such methods
is the lack of tractable methods for computing the radiation impedance of
clamped rectangular radiators. This paper presents a method which approximates
the velocity profile using a polynomial shape model to rapidly and accurately
estimate radiation impedance. The validity of the approximate velocity profile
and corresponding radiation impedance calculation was assessed using finite
element simulations for a variety of membrane aspect ratios and bias voltages.
Our method was evaluated for rectangular radiators with width:length ratios
from 1:1 up to 1:25. At all aspect ratios, the radiation resistance was closely
modeled. However, when calculating the radiation reactance, our initial
approach was only accurate for low aspect ratios. This motivated us to consider
an alternative shape model for high aspect ratios, which was more accurate when
compared with FEM. To facilitate development of future rectangular CMUTs, we
provide a MATLAB script which quickly calculates radiation impedance using both
methods.
| [
{
"created": "Fri, 28 Jun 2024 23:55:28 GMT",
"version": "v1"
}
] | 2024-07-02 | [
[
"Khorassany",
"Shayan",
""
],
[
"Dew",
"Eric B.",
""
],
[
"Sobhani",
"Mohammad Rahim",
""
],
[
"Zemp",
"Roger J.",
""
]
] | Recently, capacitive micromachined ultrasound transducers (CMUTs) with long rectangular membranes have demonstrated performance advantages over conventional piezoelectric transducers; however, modeling these CMUT geometries has been limited to computationally burdensome numerical methods. Improved fast modeling methods such as equivalent circuit models could help achieve designs with even better performance. The primary obstacle in developing such methods is the lack of tractable methods for computing the radiation impedance of clamped rectangular radiators. This paper presents a method which approximates the velocity profile using a polynomial shape model to rapidly and accurately estimate radiation impedance. The validity of the approximate velocity profile and corresponding radiation impedance calculation was assessed using finite element simulations for a variety of membrane aspect ratios and bias voltages. Our method was evaluated for rectangular radiators with width:length ratios from 1:1 up to 1:25. At all aspect ratios, the radiation resistance was closely modeled. However, when calculating the radiation reactance, our initial approach was only accurate for low aspect ratios. This motivated us to consider an alternative shape model for high aspect ratios, which was more accurate when compared with FEM. To facilitate development of future rectangular CMUTs, we provide a MATLAB script which quickly calculates radiation impedance using both methods. |
2309.01240 | Visweswaran Baskaran | Akshaya C S, Karthik Soma, Visweswaran B, Aditya Ravichander and
Venkata Nagarjun PM | Decentralized shape formation and force-based interactive formation
control in robot swarms | 6 pages, 10 figures | null | null | null | cs.MA cs.RO | http://creativecommons.org/licenses/by/4.0/ | Swarm robotic systems utilize collective behaviour to achieve goals that
might be too complex for a lone entity, but become attainable with localized
communication and collective decision making. In this paper, a behaviour-based
distributed approach to shape formation is proposed. Flocking into strategic
formations is observed in migratory birds and fish to avoid predators and also
for energy conservation. The formation is maintained throughout long periods
without collapsing and is advantageous for communicating within the flock.
Similar behaviour can be deployed in multi-agent systems to enhance
coordination within the swarm. Existing methods for formation control are
either dependent on the size and geometry of the formation or rely on
maintaining the formation with a single reference in the swarm (the leader).
These methods are not resilient to failure and involve a high degree of
deformation upon obstacle encounter before the shape is recovered again. To
improve the performance, artificial force-based interaction amongst the
entities of the swarm to maintain shape integrity while encountering obstacles
is elucidated.
| [
{
"created": "Sun, 3 Sep 2023 18:46:39 GMT",
"version": "v1"
}
] | 2023-09-06 | [
[
"S",
"Akshaya C",
""
],
[
"Soma",
"Karthik",
""
],
[
"B",
"Visweswaran",
""
],
[
"Ravichander",
"Aditya",
""
],
[
"PM",
"Venkata Nagarjun",
""
]
] | Swarm robotic systems utilize collective behaviour to achieve goals that might be too complex for a lone entity, but become attainable with localized communication and collective decision making. In this paper, a behaviour-based distributed approach to shape formation is proposed. Flocking into strategic formations is observed in migratory birds and fish to avoid predators and also for energy conservation. The formation is maintained throughout long periods without collapsing and is advantageous for communicating within the flock. Similar behaviour can be deployed in multi-agent systems to enhance coordination within the swarm. Existing methods for formation control are either dependent on the size and geometry of the formation or rely on maintaining the formation with a single reference in the swarm (the leader). These methods are not resilient to failure and involve a high degree of deformation upon obstacle encounter before the shape is recovered again. To improve the performance, artificial force-based interaction amongst the entities of the swarm to maintain shape integrity while encountering obstacles is elucidated. |
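
The artificial-force idea can be sketched as a spring pull toward each agent's slot in the target shape plus a short-range obstacle repulsion; gains, ranges, and the integration step below are illustrative assumptions.

```python
import numpy as np

def forces(pos, targets, obstacles, k_form=1.0, k_rep=2.0, rep_range=1.5):
    f = k_form * (targets - pos)                      # attraction to slots
    for ob in obstacles:
        d = pos - ob
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        near = dist < rep_range                       # repel only when close
        f += np.where(near, k_rep * d / (dist ** 3 + 1e-9), 0.0)
    return f

pos = np.random.default_rng(4).random((6, 2)) * 4
targets = np.array([[np.cos(a), np.sin(a)]            # slots on a unit circle
                    for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
obstacles = [np.array([0.5, 0.5])]
for _ in range(200):                                  # explicit Euler steps
    pos += 0.05 * forces(pos, targets, obstacles)
print("max slot error:", np.abs(pos - targets).max())
```
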
2308.11204 | Zihang Liu | Zihang Liu, Le Yu, Tongyu Zhu, Leilei Sun | A Simple Framework for Multi-mode Spatial-Temporal Data Modeling | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Spatial-temporal data modeling aims to mine the underlying spatial
relationships and temporal dependencies of objects in a system. However, most
existing methods focus on the modeling of spatial-temporal data in a single
mode, lacking the understanding of multiple modes. Though very few methods have
been presented to learn the multi-mode relationships recently, they are built
on complicated components with higher model complexities. In this paper, we
propose a simple framework for multi-mode spatial-temporal data modeling to
bring both effectiveness and efficiency together. Specifically, we design a
general cross-mode spatial relationships learning component to adaptively
establish connections between multiple modes and propagate information along
the learned connections. Moreover, we employ multi-layer perceptrons to capture
the temporal dependencies and channel correlations, which are conceptually and
technically succinct. Experiments on three real-world datasets show that our
model can consistently outperform the baselines with lower space and time
complexity, opening up a promising direction for modeling spatial-temporal
data. The generalizability of the cross-mode spatial relationships learning
module is also validated.
| [
{
"created": "Tue, 22 Aug 2023 05:41:20 GMT",
"version": "v1"
}
] | 2023-08-23 | [
[
"Liu",
"Zihang",
""
],
[
"Yu",
"Le",
""
],
[
"Zhu",
"Tongyu",
""
],
[
"Sun",
"Leiei",
""
]
] | Spatial-temporal data modeling aims to mine the underlying spatial relationships and temporal dependencies of objects in a system. However, most existing methods focus on the modeling of spatial-temporal data in a single mode, lacking the understanding of multiple modes. Though very few methods have been presented to learn the multi-mode relationships recently, they are built on complicated components with higher model complexities. In this paper, we propose a simple framework for multi-mode spatial-temporal data modeling to bring both effectiveness and efficiency together. Specifically, we design a general cross-mode spatial relationships learning component to adaptively establish connections between multiple modes and propagate information along the learned connections. Moreover, we employ multi-layer perceptrons to capture the temporal dependencies and channel correlations, which are conceptually and technically succinct. Experiments on three real-world datasets show that our model can consistently outperform the baselines with lower space and time complexity, opening up a promising direction for modeling spatial-temporal data. The generalizability of the cross-mode spatial relationships learning module is also validated. |
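
The cross-mode learning component can be pictured as a learned adjacency between the node sets of two modes, followed by an MLP over the time axis. Shapes, random parameters, and the softmax normalization below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(5)
N1, N2, T, C = 6, 4, 12, 8                 # nodes per mode, time steps, channels
X1, X2 = rng.random((N1, T, C)), rng.random((N2, T, C))

E1, E2 = rng.random((N1, 16)), rng.random((N2, 16))        # node embeddings
logits = E1 @ E2.T
A = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)  # mode-1 <- mode-2

X1_fused = X1 + np.einsum("nm,mtc->ntc", A, X2)            # cross-mode messages
W1, W2 = rng.random((T, T)), rng.random((T, T))
H = np.maximum(0, np.einsum("ntc,ts->nsc", X1_fused, W1))  # temporal MLP layer 1
Y = np.einsum("nsc,st->ntc", H, W2)                        # temporal MLP layer 2
print(Y.shape)
```
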
2203.15706 | Alec Linot | Alec J. Linot, Joshua W. Burby, Qi Tang, Prasanna Balaprakash, Michael
D. Graham, Romit Maulik | Stabilized Neural Ordinary Differential Equations for Long-Time
Forecasting of Dynamical Systems | null | null | 10.1016/j.jcp.2022.111838 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In data-driven modeling of spatiotemporal phenomena, careful consideration
must often be given to capturing the dynamics of the high wavenumbers. This
problem becomes especially challenging when the system of interest exhibits
shocks or chaotic dynamics. We present a data-driven modeling method that
accurately captures shocks and chaotic dynamics by proposing a novel
architecture, stabilized neural ordinary differential equation (ODE). In our
proposed architecture, we learn the right-hand-side (RHS) of an ODE by adding
the outputs of two NNs, where one learns a linear term and the other a
nonlinear term. Specifically, we implement this by training a sparse linear
convolutional NN to learn the linear term and a dense fully-connected nonlinear
NN to learn the nonlinear term. This is in contrast with the standard neural
ODE which involves training only a single NN for learning the RHS. We apply
this setup to the viscous Burgers equation, which exhibits shocked behavior,
and show better short-time tracking and prediction of the energy spectrum at
high wavenumbers than a standard neural ODE. We also find that the stabilized
neural ODE models are much more robust to noisy initial conditions than the
standard neural ODE approach. We also apply this method to chaotic trajectories
of the Kuramoto-Sivashinsky equation. In this case, stabilized neural ODEs keep
long-time trajectories on the attractor, and are highly robust to noisy initial
conditions, while standard neural ODEs fail at achieving either of these
results. We conclude by demonstrating how stabilized neural ODEs provide a
natural extension for use in reduced-order modeling by projecting the dynamics
onto the eigenvectors of the learned linear term.
| [
{
"created": "Tue, 29 Mar 2022 16:10:34 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Oct 2022 00:03:30 GMT",
"version": "v2"
}
] | 2022-12-28 | [
[
"Linot",
"Alec J.",
""
],
[
"Burby",
"Joshua W.",
""
],
[
"Tang",
"Qi",
""
],
[
"Balaprakash",
"Prasanna",
""
],
[
"Graham",
"Michael D.",
""
],
[
"Maulik",
"Romit",
""
]
] | In data-driven modeling of spatiotemporal phenomena careful consideration often needs to be made in capturing the dynamics of the high wavenumbers. This problem becomes especially challenging when the system of interest exhibits shocks or chaotic dynamics. We present a data-driven modeling method that accurately captures shocks and chaotic dynamics by proposing a novel architecture, stabilized neural ordinary differential equation (ODE). In our proposed architecture, we learn the right-hand-side (RHS) of an ODE by adding the outputs of two NN together where one learns a linear term and the other a nonlinear term. Specifically, we implement this by training a sparse linear convolutional NN to learn the linear term and a dense fully-connected nonlinear NN to learn the nonlinear term. This is in contrast with the standard neural ODE which involves training only a single NN for learning the RHS. We apply this setup to the viscous Burgers equation, which exhibits shocked behavior, and show better short-time tracking and prediction of the energy spectrum at high wavenumbers than a standard neural ODE. We also find that the stabilized neural ODE models are much more robust to noisy initial conditions than the standard neural ODE approach. We also apply this method to chaotic trajectories of the Kuramoto-Sivashinsky equation. In this case, stabilized neural ODEs keep long-time trajectories on the attractor, and are highly robust to noisy initial conditions, while standard neural ODEs fail at achieving either of these results. We conclude by demonstrating how stabilizing neural ODEs provide a natural extension for use in reduced-order modeling by projecting the dynamics onto the eigenvectors of the learned linear term. |
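The stabilized-RHS idea above — learn du/dt as the sum of a linear term and a nonlinear term, each its own network — is easy to show in miniature. The sketch below is an assumption-laden toy, not the paper's implementation: the kernel size, hidden width, grid size, and the forward-Euler integrator are all illustrative choices.

```python
import torch
import torch.nn as nn

class StabilizedRHS(nn.Module):
    """RHS of the ODE as a linear term plus a nonlinear term."""
    def __init__(self, n_grid, hidden=128):
        super().__init__()
        # linear term: a bias-free convolution acts as a sparse, banded
        # linear operator on the discretized field
        self.linear = nn.Conv1d(1, 1, kernel_size=5, padding=2, bias=False)
        # nonlinear term: a dense fully-connected network
        self.nonlinear = nn.Sequential(
            nn.Linear(n_grid, hidden), nn.Tanh(), nn.Linear(hidden, n_grid))

    def forward(self, u):                     # u: (batch, n_grid)
        lin = self.linear(u.unsqueeze(1)).squeeze(1)
        return lin + self.nonlinear(u)        # du/dt = L u + N(u)

def euler_rollout(rhs, u0, dt, steps):
    u, traj = u0, [u0]
    for _ in range(steps):
        u = u + dt * rhs(u)                   # forward-Euler time stepping
        traj.append(u)
    return torch.stack(traj, dim=1)

rhs = StabilizedRHS(n_grid=64)
traj = euler_rollout(rhs, torch.randn(4, 64), dt=0.01, steps=10)
print(traj.shape)                             # torch.Size([4, 11, 64])
```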
2306.02190 | Sofia Serrano | Sofia Serrano, Jesse Dodge, Noah A. Smith | Stubborn Lexical Bias in Data and Models | ACL Findings 2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In NLP, recent work has seen increased focus on spurious correlations between
various features and labels in training data, and how these influence model
behavior. However, the presence and effect of such correlations are typically
examined feature by feature. We investigate the cumulative impact on a model of
many such intersecting features. Using a new statistical method, we examine
whether such spurious patterns in data appear in models trained on the data. We
select two tasks -- natural language inference and duplicate-question detection
-- for which any unigram feature on its own should ideally be uninformative,
which gives us a large pool of automatically extracted features with which to
experiment. The large size of this pool allows us to investigate the
intersection of features spuriously associated with (potentially different)
labels. We then apply an optimization approach to *reweight* the training data,
reducing thousands of spurious correlations, and examine how doing so affects
models trained on the reweighted data. Surprisingly, though this method can
successfully reduce lexical biases in the training data, we still find strong
evidence of corresponding bias in the trained models, including worsened bias
for slightly more complex features (bigrams). We close with discussion about
the implications of our results on what it means to "debias" training data, and
how issues of data quality can affect model bias.
| [
{
"created": "Sat, 3 Jun 2023 20:12:27 GMT",
"version": "v1"
}
] | 2023-06-06 | [
[
"Serrano",
"Sofia",
""
],
[
"Dodge",
"Jesse",
""
],
[
"Smith",
"Noah A.",
""
]
] | In NLP, recent work has seen increased focus on spurious correlations between various features and labels in training data, and how these influence model behavior. However, the presence and effect of such correlations are typically examined feature by feature. We investigate the cumulative impact on a model of many such intersecting features. Using a new statistical method, we examine whether such spurious patterns in data appear in models trained on the data. We select two tasks -- natural language inference and duplicate-question detection -- for which any unigram feature on its own should ideally be uninformative, which gives us a large pool of automatically extracted features with which to experiment. The large size of this pool allows us to investigate the intersection of features spuriously associated with (potentially different) labels. We then apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations, and examine how doing so affects models trained on the reweighted data. Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models, including worsened bias for slightly more complex features (bigrams). We close with discussion about the implications of our results on what it means to "debias" training data, and how issues of data quality can affect model bias. |
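The reweighting step is the most concrete piece of this abstract. As a toy stand-in for the paper's joint optimization over thousands of features, the sketch below balances label mass for a single unigram; the sentences and the word choice are made up for illustration.

```python
from collections import Counter

examples = [("no one is here", 0), ("no dogs allowed", 0),
            ("no rain today", 1), ("the cat sat", 1), ("a dog ran", 0)]
word = "no"

# label counts among sentences containing the word: {0: 2, 1: 1}
label_counts = Counter(lab for txt, lab in examples if word in txt.split())

# weight each such sentence by 1 / (count of its label within the subset)
weights = [1.0 / label_counts[lab] if word in txt.split() else 1.0
           for txt, lab in examples]

mass = Counter()
for (txt, lab), w in zip(examples, weights):
    if word in txt.split():
        mass[lab] += w
print(dict(mass))  # {0: 1.0, 1: 1.0} -- the correlation is balanced away
```

After reweighting, the word "no" carries no information about the label in this tiny corpus: the weighted label masses within its subset are equal, which is the single-feature analogue of the paper's goal.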
1501.07422 | Kohta Ishikawa | Kohta Ishikawa, Ikuro Sato, Mitsuru Ambai | Pairwise Rotation Hashing for High-dimensional Features | 16 pages, 8 figures, wrote at Mar 2014 | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binary Hashing is widely used for effective approximate nearest neighbors
search. Even though various binary hashing methods have been proposed, very few
methods are feasible for extremely high-dimensional features often used in
visual tasks today. We propose a novel highly sparse linear hashing method
based on pairwise rotations. The encoding cost of the proposed algorithm is
$\mathrm{O}(n \log n)$ for n-dimensional features, whereas that of the existing
state-of-the-art method is typically $\mathrm{O}(n^2)$. The proposed method is
also remarkably faster in the learning phase. Along with the efficiency, the
retrieval accuracy is comparable to, or slightly better than, the
state-of-the-art. Pairwise rotations used in our method are formulated from an
analytical study of the trade-off relationship between quantization error and
entropy of binary codes. Although these hashing criteria are widely used in
previous research, their analytical behavior is rarely studied. All building
blocks of our algorithm are based on the analytical solution, and it thus
provides a fairly simple and efficient procedure.
| [
{
"created": "Thu, 29 Jan 2015 11:50:33 GMT",
"version": "v1"
}
] | 2015-01-30 | [
[
"Ishikawa",
"Kohta",
""
],
[
"Sato",
"Ikuro",
""
],
[
"Ambai",
"Mitsuru",
""
]
] | Binary Hashing is widely used for effective approximate nearest neighbors search. Even though various binary hashing methods have been proposed, very few methods are feasible for extremely high-dimensional features often used in visual tasks today. We propose a novel highly sparse linear hashing method based on pairwise rotations. The encoding cost of the proposed algorithm is $\mathrm{O}(n \log n)$ for n-dimensional features, whereas that of the existing state-of-the-art method is typically $\mathrm{O}(n^2)$. The proposed method is also remarkably faster in the learning phase. Along with the efficiency, the retrieval accuracy is comparable to, or slightly better than, the state-of-the-art. Pairwise rotations used in our method are formulated from an analytical study of the trade-off relationship between quantization error and entropy of binary codes. Although these hashing criteria are widely used in previous research, their analytical behavior is rarely studied. All building blocks of our algorithm are based on the analytical solution, and it thus provides a fairly simple and efficient procedure.
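Why pairwise rotations give an O(n log n) encoder: each round applies n/2 disjoint 2x2 (Givens) rotations, so O(log n) rounds cost O(n log n) before sign binarization. The sketch below uses random pairings and angles purely for illustration; the paper derives both analytically from the quantization-error/entropy trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rounds(n, num_rounds):
    rounds = []
    for _ in range(num_rounds):
        pairs = rng.permutation(n).reshape(-1, 2)     # n/2 disjoint pairs
        angles = rng.uniform(0.0, 2.0 * np.pi, len(pairs))
        rounds.append((pairs, angles))
    return rounds

def encode(x, rounds):
    x = x.copy()
    for pairs, angles in rounds:
        i, j = pairs[:, 0], pairs[:, 1]
        c, s = np.cos(angles), np.sin(angles)
        xi, xj = x[i].copy(), x[j].copy()
        x[i] = c * xi - s * xj                        # 2x2 rotation per pair
        x[j] = s * xi + c * xj
    return (x > 0).astype(np.uint8)                   # sign binarization

n = 1024                                              # must be even
rounds = make_rounds(n, num_rounds=int(np.log2(n)))   # O(n log n) total work
print(encode(rng.standard_normal(n), rounds)[:16])
```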
cs/0612081 | Manas Tungare | Manas Tungare, Pardha S. Pyla, Manuel P\'erez-Qui\~nones, and Steve
Harrison | Personal Information Ecosystems and Implications for Design | null | null | null | null | cs.HC | null | Today, people use multiple devices to fulfill their information needs.
However, designers design each device individually, without accounting for the
other devices that users may also use. In many cases, the applications on all
these devices are designed to be functional replicates of each other. We argue
that this results in an over-reliance on data synchronization across devices,
version control nightmares, and increased burden of file management. In this
paper, we present the idea of a \textit{personal information ecosystem}, an
analogy to biological ecosystems, which allows us to discuss the
inter-relationships among these devices to fulfill the information needs of the
user. There is a need for designers to design devices as part of a complete
ecosystem, not as independent devices that simply share data replicated across
them. To help us understand this domain and to facilitate the dialogue and
study of such systems, we present the terminology, classifications of the
interdependencies among different devices, and resulting implications for
design.
| [
{
"created": "Mon, 18 Dec 2006 07:53:34 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Tungare",
"Manas",
""
],
[
"Pyla",
"Pardha S.",
""
],
[
"Pérez-Quiñones",
"Manuel",
""
],
[
"Harrison",
"Steve",
""
]
] | Today, people use multiple devices to fulfill their information needs. However, designers design each device individually, without accounting for the other devices that users may also use. In many cases, the applications on all these devices are designed to be functional replicates of each other. We argue that this results in an over-reliance on data synchronization across devices, version control nightmares, and increased burden of file management. In this paper, we present the idea of a \textit{personal information ecosystem}, an analogy to biological ecosystems, which allows us to discuss the inter-relationships among these devices to fulfill the information needs of the user. There is a need for designers to design devices as part of a complete ecosystem, not as independent devices that simply share data replicated across them. To help us understand this domain and to facilitate the dialogue and study of such systems, we present the terminology, classifications of the interdependencies among different devices, and resulting implications for design. |
2305.09527 | Dominik Muhle | Dominik Muhle, Lukas Koestler, Krishna Murthy Jatavallabhula, Daniel
Cremers | Learning Correspondence Uncertainty via Differentiable Nonlinear Least
Squares | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a differentiable nonlinear least squares framework to account for
uncertainty in relative pose estimation from feature correspondences.
Specifically, we introduce a symmetric version of the probabilistic normal
epipolar constraint, and an approach to estimate the covariance of feature
positions by differentiating through the camera pose estimation procedure. We
evaluate our approach on synthetic, as well as the KITTI and EuRoC real-world
datasets. On the synthetic dataset, we confirm that our learned covariances
accurately approximate the true noise distribution. In real world experiments,
we find that our approach consistently outperforms state-of-the-art
non-probabilistic and probabilistic approaches, regardless of the feature
extraction algorithm of choice.
| [
{
"created": "Tue, 16 May 2023 15:21:09 GMT",
"version": "v1"
},
{
"created": "Thu, 18 May 2023 18:35:23 GMT",
"version": "v2"
}
] | 2023-05-22 | [
[
"Muhle",
"Dominik",
""
],
[
"Koestler",
"Lukas",
""
],
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Cremers",
"Daniel",
""
]
] | We propose a differentiable nonlinear least squares framework to account for uncertainty in relative pose estimation from feature correspondences. Specifically, we introduce a symmetric version of the probabilistic normal epipolar constraint, and an approach to estimate the covariance of feature positions by differentiating through the camera pose estimation procedure. We evaluate our approach on synthetic, as well as the KITTI and EuRoC real-world datasets. On the synthetic dataset, we confirm that our learned covariances accurately approximate the true noise distribution. In real world experiments, we find that our approach consistently outperforms state-of-the-art non-probabilistic and probabilistic approaches, regardless of the feature extraction algorithm of choice. |
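The key mechanism here — differentiating through the pose solver — can be shown on a toy problem. In the hedged sketch below, a few Gauss-Newton steps for a 1-D curve fit are unrolled in PyTorch so that gradients reach learnable per-residual weights; the epipolar geometry of the paper is deliberately replaced by this much simpler residual.

```python
import torch

def gauss_newton(theta, xs, ys, log_w, steps=5):
    """Unrolled Gauss-Newton for fitting y = exp(a*x) + b with weights."""
    for _ in range(steps):
        r = (torch.exp(theta[0] * xs) + theta[1]) - ys    # residuals
        w = torch.exp(-log_w)                             # learnable weights
        J = torch.stack([xs * torch.exp(theta[0] * xs),   # dr/da
                         torch.ones_like(xs)], dim=1)     # dr/db
        JTJ = J.T @ (w[:, None] * J)
        delta = torch.linalg.solve(JTJ, -J.T @ (w * r))
        theta = theta + delta
    return theta

xs = torch.linspace(0.0, 1.0, 20)
ys = torch.exp(0.5 * xs) + 1.0 + 0.01 * torch.randn(20)
log_w = torch.zeros(20, requires_grad=True)       # per-residual log-variance
theta = gauss_newton(torch.tensor([0.1, 0.0]), xs, ys, log_w)
theta.sum().backward()                            # gradients flow into log_w
print(theta.detach(), log_w.grad.shape)
```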
2206.00169 | Giannis Daras | Giannis Daras and Alexandros G. Dimakis | Discovering the Hidden Vocabulary of DALLE-2 | 6 pages, 4 figures | null | null | null | cs.LG cs.CL cs.CR cs.CV | http://creativecommons.org/licenses/by/4.0/ | We discover that DALLE-2 seems to have a hidden vocabulary that can be used
to generate images with absurd prompts. For example, it seems that
\texttt{Apoploe vesrreaitais} means birds and \texttt{Contarra ccetnxniams
luryca tanniounons} (sometimes) means bugs or pests. We find that these prompts
are often consistent in isolation but also sometimes in combinations. We
present our black-box method to discover words that seem random but have some
correspondence to visual concepts. This creates important security and
interpretability challenges.
| [
{
"created": "Wed, 1 Jun 2022 01:14:48 GMT",
"version": "v1"
}
] | 2022-06-02 | [
[
"Daras",
"Giannis",
""
],
[
"Dimakis",
"Alexandros G.",
""
]
] | We discover that DALLE-2 seems to have a hidden vocabulary that can be used to generate images with absurd prompts. For example, it seems that \texttt{Apoploe vesrreaitais} means birds and \texttt{Contarra ccetnxniams luryca tanniounons} (sometimes) means bugs or pests. We find that these prompts are often consistent in isolation but also sometimes in combinations. We present our black-box method to discover words that seem random but have some correspondence to visual concepts. This creates important security and interpretability challenges. |
2005.05455 | Ron Roth | Ron M. Roth, Paul H. Siegel | Variable-Length Constrained Coding and Kraft Conditions: The
Parity-Preserving Case | Title has been changed, along with minor modification in text | null | null | null | cs.IT math.CO math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous work by the authors on parity-preserving fixed-length constrained
encoders is extended to the variable-length case. Parity-preserving
variable-length encoders are formally defined, and, to this end, Kraft
conditions are developed for the parity-preserving variable-length setting.
Then, a necessary and sufficient condition is presented for the existence of
deterministic parity-preserving variable-length encoders for a given
constraint. Examples are provided that show that there are coding ratios where
parity-preserving variable-length encoders exist, while fixed-length encoders
do not.
| [
{
"created": "Mon, 11 May 2020 21:53:22 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Jul 2021 20:53:48 GMT",
"version": "v2"
}
] | 2021-07-05 | [
[
"Roth",
"Ron M.",
""
],
[
"Siegel",
"Paul H.",
""
]
] | Previous work by the authors on parity-preserving fixed-length constrained encoders is extended to the variable-length case. Parity-preserving variable-length encoders are formally defined, and, to this end, Kraft conditions are developed for the parity-preserving variable-length setting. Then, a necessary and sufficient condition is presented for the existence of deterministic parity-preserving variable-length encoders for a given constraint. Examples are provided that show that there are coding ratios where parity-preserving variable-length encoders exist, while fixed-length encoders do not. |
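For orientation, the classical Kraft inequality for a binary prefix code is sum_i 2^(-l_i) <= 1. The paper's contribution is a parity-aware refinement of such conditions for the parity-preserving setting; the snippet below checks only the classical inequality and reports codeword parities, as a naive illustration of the quantities involved.

```python
def kraft_sum(lengths):
    """Classical Kraft sum for a binary alphabet: sum_i 2**(-l_i)."""
    return sum(2.0 ** -l for l in lengths)

def parity(word):
    return word.count("1") % 2              # 0 = even weight, 1 = odd weight

code = ["0", "10", "110", "111"]            # a complete binary prefix code
print(kraft_sum(len(w) for w in code))      # 1.0 -> Kraft holds with equality
for w in code:
    print(w, "-> parity", parity(w))
```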
2305.18859 | Jan Mrkos | David Fiedler and Jan Mrkos | Large-scale Ridesharing DARP Instances Based on Real Travel Demand | 8 pages, 9 figures. Submitted to 26th IEEE International Conference
on Intelligent Transportation Systems ITSC 2023. For the published associated
dataset and source codes, see the repository
https://github.com/aicenter/Ridesharing_DARP_instances | null | null | null | cs.AI math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately predicting the real-life performance of algorithms solving the
Dial-a-Ride Problem (DARP) in the context of Mobility on Demand (MoD) systems
with ridesharing requires evaluating them on representative instances. However,
the benchmarking of state-of-the-art DARP solution methods has been limited to
small, artificial instances or outdated non-public instances, hindering direct
comparisons. With the rise of large MoD systems and the availability of open
travel demand datasets for many US cities, there is now an opportunity to
evaluate these algorithms on standardized, realistic, and representative
instances. Despite the significant challenges involved in processing obfuscated
and diverse datasets, we have developed a methodology with which we have
created a comprehensive set of large-scale demand instances based on real-world
data. These instances cover diverse use cases, one of which is demonstrated in
an evaluation of two established DARP methods: the insertion heuristic and
optimal vehicle-group assignment method. We publish the full results of both
methods in a standardized format. The results show significant differences
between areas in all measured quantities, emphasizing the importance of
evaluating methods across different cities.
| [
{
"created": "Tue, 30 May 2023 08:51:11 GMT",
"version": "v1"
}
] | 2023-05-31 | [
[
"Fiedler",
"David",
""
],
[
"Mrkos",
"Jan",
""
]
] | Accurately predicting the real-life performance of algorithms solving the Dial-a-Ride Problem (DARP) in the context of Mobility on Demand (MoD) systems with ridesharing requires evaluating them on representative instances. However, the benchmarking of state-of-the-art DARP solution methods has been limited to small, artificial instances or outdated non-public instances, hindering direct comparisons. With the rise of large MoD systems and the availability of open travel demand datasets for many US cities, there is now an opportunity to evaluate these algorithms on standardized, realistic, and representative instances. Despite the significant challenges involved in processing obfuscated and diverse datasets, we have developed a methodology with which we have created a comprehensive set of large-scale demand instances based on real-world data. These instances cover diverse use cases, one of which is demonstrated in an evaluation of two established DARP methods: the insertion heuristic and optimal vehicle-group assignment method. We publish the full results of both methods in a standardized format. The results show significant differences between areas in all measured quantities, emphasizing the importance of evaluating methods across different cities.
1809.08198 | Huda Nassar | Huda Nassar, Georgios Kollias, Ananth Grama, David F. Gleich | Low rank methods for multiple network alignment | 17 pages, 10 figures | null | null | null | cs.SI cs.LG physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple network alignment is the problem of identifying similar and related
regions in a given set of networks. While there are a large number of effective
techniques for pairwise problems with two networks that scale in terms of
edges, these cannot be readily extended to align multiple networks as the
computational complexity will tend to grow exponentially with the number of
networks. In this paper we introduce a new multiple network alignment algorithm
and framework that is effective at aligning thousands of networks with
thousands of nodes. The key enabling technique of our algorithm is identifying
an exact and easy to compute low-rank tensor structure inside of a principled
heuristic procedure for pairwise network alignment called IsoRank. This can be
combined with a new algorithm for $k$-dimensional matching problems on low-rank
tensors to produce the alignment. We demonstrate results on synthetic and
real-world problems that show our technique (i) is as good or better in terms
of quality as existing methods, when they work on small problems, while running
considerably faster and (ii) is able to scale to aligning a number of networks
unreachable by current methods. We show in this paper that our method is the
realistic choice for aligning multiple networks when no prior information is
present.
| [
{
"created": "Fri, 21 Sep 2018 16:38:36 GMT",
"version": "v1"
}
] | 2018-09-24 | [
[
"Nassar",
"Huda",
""
],
[
"Kollias",
"Georgios",
""
],
[
"Grama",
"Ananth",
""
],
[
"Gleich",
"David F.",
""
]
] | Multiple network alignment is the problem of identifying similar and related regions in a given set of networks. While there are a large number of effective techniques for pairwise problems with two networks that scale in terms of edges, these cannot be readily extended to align multiple networks as the computational complexity will tend to grow exponentially with the number of networks. In this paper we introduce a new multiple network alignment algorithm and framework that is effective at aligning thousands of networks with thousands of nodes. The key enabling technique of our algorithm is identifying an exact and easy to compute low-rank tensor structure inside of a principled heuristic procedure for pairwise network alignment called IsoRank. This can be combined with a new algorithm for $k$-dimensional matching problems on low-rank tensors to produce the alignment. We demonstrate results on synthetic and real-world problems that show our technique (i) is as good or better in terms of quality as existing methods, when they work on small problems, while running considerably faster and (ii) is able to scale to aligning a number of networks unreachable by current methods. We show in this paper that our method is the realistic choice for aligning multiple networks when no prior information is present.
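A compact way to see the low-rank structure: the pairwise IsoRank similarity iteration can be written in matrix form, R <- alpha * A_n R B_n^T + (1 - alpha) * H, and when the prior H is low rank, the iterate stays low rank. The sketch below is that textbook pairwise iteration on a toy graph, not the paper's multi-network algorithm; the graph size, alpha, and the uniform prior are arbitrary choices.

```python
import numpy as np

def normalize(A):
    d = A.sum(axis=0)
    d[d == 0] = 1.0
    return A / d                        # column-stochastic normalization

def isorank(A, B, H, alpha=0.8, iters=50):
    An, Bn = normalize(A), normalize(B)
    R = H.copy()
    for _ in range(iters):
        R = alpha * An @ R @ Bn.T + (1 - alpha) * H
    return R

rng = np.random.default_rng(1)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                             # random undirected toy graph
H = np.ones((6, 6)) / 36                # uniform prior: rank one
R = isorank(A, A.copy(), H)
print(np.argmax(R, axis=1))             # with B = A, scores tend to favor the diagonal
```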
1806.00428 | Aditya Vora | Aditya Vora | A Classification approach towards Unsupervised Learning of Visual
Representations | null | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a technique for unsupervised learning of visual
representations. Specifically, we train a model for a foreground and background
classification task, in the process of which it learns visual representations.
Foreground and background patches for training come after mining for such
patches from hundreds and thousands of unlabelled videos available on the web,
which we extract using a proposed patch extraction algorithm. Without using
any supervision, with just 150,000 unlabelled videos and the PASCAL
VOC 2007 dataset, we train an object recognition model that achieves 45.3 mAP,
which is close to the best performing unsupervised feature learning technique
and better than many other proposed algorithms. The code for patch
extraction is implemented in Matlab and available open source at the following
link.
| [
{
"created": "Fri, 1 Jun 2018 16:35:08 GMT",
"version": "v1"
}
] | 2018-06-04 | [
[
"Vora",
"Aditya",
""
]
] | In this paper, we present a technique for unsupervised learning of visual representations. Specifically, we train a model for a foreground and background classification task, in the process of which it learns visual representations. Foreground and background patches for training come after mining for such patches from hundreds and thousands of unlabelled videos available on the web, which we extract using a proposed patch extraction algorithm. Without using any supervision, with just 150,000 unlabelled videos and the PASCAL VOC 2007 dataset, we train an object recognition model that achieves 45.3 mAP, which is close to the best performing unsupervised feature learning technique and better than many other proposed algorithms. The code for patch extraction is implemented in Matlab and available open source at the following link.
2401.09456 | Denis Shchepakin | Denis Shchepakin, Sreecharan Sankaranarayanan, Dawn Zimmaro | Parametric Constraints for Bayesian Knowledge Tracing from First
Principles | null | null | null | null | cs.CY cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Bayesian Knowledge Tracing (BKT) is a probabilistic model of a learner's
state of mastery corresponding to a knowledge component. It considers the
learner's state of mastery as a "hidden" or latent binary variable and updates
this state based on the observed correctness of the learner's response using
parameters that represent transition probabilities between states. BKT is often
represented as a Hidden Markov Model and the Expectation-Maximization (EM)
algorithm is used to infer these parameters. However, this algorithm can suffer
from several issues including producing multiple viable sets of parameters,
settling into a local minima, producing degenerate parameter values, and a high
computational cost during fitting. This paper takes a "from first principles"
approach to deriving constraints that can be imposed on the BKT parameter
space. Starting from the basic mathematical truths of probability and building
up to the behaviors expected of the BKT parameters in real systems, this paper
presents a mathematical derivation that results in succinct constraints that
can be imposed on the BKT parameter space. Since these constraints are
necessary conditions, they can be applied prior to fitting in order to reduce
computational cost and the likelihood of issues that can emerge from the EM
procedure. In order to see that promise through, the paper further introduces a
novel algorithm for estimating BKT parameters subject to the newly defined
constraints. While the issue of degenerate parameter values has been reported
previously, this paper is the first, to our best knowledge, to derive the
constraints from first principles while also presenting an algorithm that
respects those constraints.
| [
{
"created": "Sat, 23 Dec 2023 03:58:41 GMT",
"version": "v1"
}
] | 2024-01-19 | [
[
"Shchepakin",
"Denis",
""
],
[
"Sankaranarayanan",
"Sreecharan",
""
],
[
"Zimmaro",
"Dawn",
""
]
] | Bayesian Knowledge Tracing (BKT) is a probabilistic model of a learner's state of mastery corresponding to a knowledge component. It considers the learner's state of mastery as a "hidden" or latent binary variable and updates this state based on the observed correctness of the learner's response using parameters that represent transition probabilities between states. BKT is often represented as a Hidden Markov Model and the Expectation-Maximization (EM) algorithm is used to infer these parameters. However, this algorithm can suffer from several issues including producing multiple viable sets of parameters, settling into local minima, producing degenerate parameter values, and a high computational cost during fitting. This paper takes a "from first principles" approach to deriving constraints that can be imposed on the BKT parameter space. Starting from the basic mathematical truths of probability and building up to the behaviors expected of the BKT parameters in real systems, this paper presents a mathematical derivation that results in succinct constraints that can be imposed on the BKT parameter space. Since these constraints are necessary conditions, they can be applied prior to fitting in order to reduce computational cost and the likelihood of issues that can emerge from the EM procedure. In order to see that promise through, the paper further introduces a novel algorithm for estimating BKT parameters subject to the newly defined constraints. While the issue of degenerate parameter values has been reported previously, this paper is the first, to our best knowledge, to derive the constraints from first principles while also presenting an algorithm that respects those constraints.
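For readers unfamiliar with BKT, the standard posterior-plus-transition update is short enough to state in code. The sketch below is the textbook formulation, not the paper's new fitting algorithm; the asserted condition guess + slip < 1 is one well-known example of the kind of non-degeneracy constraint the abstract discusses, and the parameter values are made up.

```python
def bkt_update(p_mastery, correct, guess, slip, learn):
    """One standard BKT step: Bayes update on the response, then learning."""
    if correct:
        num = p_mastery * (1 - slip)
        den = num + (1 - p_mastery) * guess
    else:
        num = p_mastery * slip
        den = num + (1 - p_mastery) * (1 - guess)
    posterior = num / den
    # transition: a non-master may learn before the next opportunity
    return posterior + (1 - posterior) * learn

params = dict(guess=0.2, slip=0.1, learn=0.3)
assert params["guess"] + params["slip"] < 1.0, "degenerate BKT parameters"

p = 0.4                                 # prior P(mastered)
for obs in [1, 1, 0, 1]:                # observed correctness sequence
    p = bkt_update(p, obs, **params)
    print(round(p, 3))
```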
1511.09360 | Faisal Abu-Khzam | Faisal N. Abu-Khzam | On the Complexity of Multi-Parameterized Cluster Editing | null | null | null | null | cs.DS cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Cluster Editing problem seeks a transformation of a given undirected
graph into a disjoint union of cliques via a minimum number of edge additions
or deletions. A multi-parameterized version of the problem is studied,
featuring a number of input parameters that bound the amount of both
edge-additions and deletions per single vertex, as well as the size of a
clique-cluster. We show that the problem remains NP-hard even when only one
edge can be deleted and at most two edges can be added per vertex. However, the
new formulation allows us to solve Cluster Editing (exactly) in polynomial time
when the number of edge-edit operations per vertex is smaller than half the
minimum cluster size. In other words, Correlation Clustering can be solved
efficiently when the number of false positives/negatives per single data
element is expected to be small compared to the minimum cluster size. As a
byproduct, we obtain a kernelization algorithm that delivers linear-size
kernels when the two edge-edit bounds are small constants.
| [
{
"created": "Mon, 30 Nov 2015 15:56:47 GMT",
"version": "v1"
}
] | 2015-12-01 | [
[
"Abu-Khzam",
"Faisal N.",
""
]
] | The Cluster Editing problem seeks a transformation of a given undirected graph into a disjoint union of cliques via a minimum number of edge additions or deletions. A multi-parameterized version of the problem is studied, featuring a number of input parameters that bound the amount of both edge-additions and deletions per single vertex, as well as the size of a clique-cluster. We show that the problem remains NP-hard even when only one edge can be deleted and at most two edges can be added per vertex. However, the new formulation allows us to solve Cluster Editing (exactly) in polynomial time when the number of edge-edit operations per vertex is smaller than half the minimum cluster size. In other words, Correlation Clustering can be solved efficiently when the number of false positives/negatives per single data element is expected to be small compared to the minimum cluster size. As a byproduct, we obtain a kernelization algorithm that delivers linear-size kernels when the two edge-edit bounds are small constants. |
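The multi-parameterized formulation charges each edit to its endpoint vertices. A small checker makes the bookkeeping concrete; the graph below is a toy, and the function is an illustration of the parameterization rather than anything from the paper.

```python
def per_vertex_edits(edges_before, edges_after):
    """Count edge additions/deletions charged to each endpoint vertex."""
    before = set(map(frozenset, edges_before))
    after = set(map(frozenset, edges_after))
    adds, dels = {}, {}
    for e in after - before:
        for v in e:
            adds[v] = adds.get(v, 0) + 1
    for e in before - after:
        for v in e:
            dels[v] = dels.get(v, 0) + 1
    return adds, dels

# path 0-1-2 edited into the triangle {0, 1, 2}: one edge {0, 2} is added,
# charging one addition each to vertices 0 and 2
adds, dels = per_vertex_edits([(0, 1), (1, 2)], [(0, 1), (1, 2), (0, 2)])
print(adds, dels)  # {0: 1, 2: 1} {}
```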
1512.06922 | Yinxiao Li | Yinxiao Li and Yonghao Yue and Danfei Xu and Eitan Grinspun and Peter
Allen | Folding Deformable Objects using Predictive Simulation and Trajectory
Optimization | 8 pages, 9 figures, Proceedings of IROS 2015 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic manipulation of deformable objects remains a challenging task. One
such task is folding a garment autonomously. Given start and end folding
positions, what is an optimal trajectory to move the robotic arm to fold a
garment? Certain trajectories will cause the garment to move, creating
wrinkles, and gaps, other trajectories will fail altogether. We present a novel
solution to find an optimal trajectory that avoids such problematic scenarios.
The trajectory is optimized by minimizing a quadratic objective function in an
off-line simulator, which includes material properties of the garment and
frictional force on the table. The function measures the dissimilarity between
a user-folded shape and the folded garment in simulation, which is then used as
an error measurement to create an optimal trajectory. We demonstrate that our
two-arm robot can follow the optimized trajectories, achieving accurate and
efficient manipulations of deformable objects.
| [
{
"created": "Tue, 22 Dec 2015 00:46:47 GMT",
"version": "v1"
}
] | 2015-12-23 | [
[
"Li",
"Yinxiao",
""
],
[
"Yue",
"Yonghao",
""
],
[
"Xu",
"Danfei",
""
],
[
"Grinspun",
"Eitan",
""
],
[
"Allen",
"Peter",
""
]
] | Robotic manipulation of deformable objects remains a challenging task. One such task is folding a garment autonomously. Given start and end folding positions, what is an optimal trajectory to move the robotic arm to fold a garment? Certain trajectories will cause the garment to move, creating wrinkles and gaps; other trajectories will fail altogether. We present a novel solution to find an optimal trajectory that avoids such problematic scenarios. The trajectory is optimized by minimizing a quadratic objective function in an off-line simulator, which includes material properties of the garment and frictional force on the table. The function measures the dissimilarity between a user-folded shape and the folded garment in simulation, which is then used as an error measurement to create an optimal trajectory. We demonstrate that our two-arm robot can follow the optimized trajectories, achieving accurate and efficient manipulations of deformable objects.
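Stripped of the cloth simulator, the optimization in this abstract is: fix the start and end of the fold, and choose intermediate waypoints that minimize a quadratic objective. The sketch below substitutes a made-up quadratic cost (smoothness plus a nominal arc height) for the simulator-based dissimilarity; all constants and dimensions are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
K = 8                                   # free intermediate waypoints (x, z)

def cost(flat):
    pts = np.vstack([start, flat.reshape(K, 2), goal])
    smooth = np.sum(np.diff(pts, axis=0) ** 2)   # short, even steps
    arc = np.sum((pts[1:-1, 1] - 0.3) ** 2)      # lift over the garment
    return smooth + 0.5 * arc

x0 = np.linspace(start, goal, K + 2)[1:-1].ravel()   # straight-line init
res = minimize(cost, x0, method="L-BFGS-B")
print(round(res.fun, 4), res.x.reshape(K, 2)[:3])
```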
2312.09369 | Dmitriy Serdyuk | Avner May, Dmitriy Serdyuk, Ankit Parag Shah, Otavio Braga, Olivier
Siohan | Audio-visual fine-tuning of audio-only ASR models | null | null | null | null | cs.SD cs.AI eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-visual automatic speech recognition (AV-ASR) models are very effective
at reducing word error rates on noisy speech, but require large amounts of
transcribed AV training data. Recently, audio-visual self-supervised learning
(SSL) approaches have been developed to reduce this dependence on transcribed
AV data, but these methods are quite complex and computationally expensive. In
this work, we propose replacing these expensive AV-SSL methods with a simple
and fast \textit{audio-only} SSL method, and then performing AV supervised
fine-tuning. We show that this approach is competitive with state-of-the-art
(SOTA) AV-SSL methods on the LRS3-TED benchmark task (within 0.5% absolute
WER), while being dramatically simpler and more efficient (12-30x faster to
pre-train). Furthermore, we show we can extend this approach to convert a SOTA
audio-only ASR model into an AV model. By doing so, we match SOTA AV-SSL
results, even though no AV data was used during pre-training.
| [
{
"created": "Thu, 14 Dec 2023 22:05:15 GMT",
"version": "v1"
}
] | 2023-12-18 | [
[
"May",
"Avner",
""
],
[
"Serdyuk",
"Dmitriy",
""
],
[
"Shah",
"Ankit Parag",
""
],
[
"Braga",
"Otavio",
""
],
[
"Siohan",
"Olivier",
""
]
] | Audio-visual automatic speech recognition (AV-ASR) models are very effective at reducing word error rates on noisy speech, but require large amounts of transcribed AV training data. Recently, audio-visual self-supervised learning (SSL) approaches have been developed to reduce this dependence on transcribed AV data, but these methods are quite complex and computationally expensive. In this work, we propose replacing these expensive AV-SSL methods with a simple and fast \textit{audio-only} SSL method, and then performing AV supervised fine-tuning. We show that this approach is competitive with state-of-the-art (SOTA) AV-SSL methods on the LRS3-TED benchmark task (within 0.5% absolute WER), while being dramatically simpler and more efficient (12-30x faster to pre-train). Furthermore, we show we can extend this approach to convert a SOTA audio-only ASR model into an AV model. By doing so, we match SOTA AV-SSL results, even though no AV data was used during pre-training. |
2406.08754 | HengRui Xing | Bangxin Li and Hengrui Xing and Chao Huang and Jin Qian and Huangqing
Xiao and Linfeng Feng and Cong Tian | Exploiting Uncommon Text-Encoded Structures for Automated Jailbreaks in
LLMs | 12 pages, 4 figures | null | null | null | cs.CL cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) are widely used in natural language processing
but face the risk of jailbreak attacks that maliciously induce them to generate
harmful content. Existing jailbreak attacks, including character-level and
context-level attacks, mainly focus on the prompt of the plain text without
specifically exploring the significant influence of its structure. In this
paper, we focus on studying how prompt structure contributes to the jailbreak
attack. We introduce a novel structure-level attack method based on tail
structures that are rarely used during LLM training, which we refer to as
Uncommon Text-Encoded Structure (UTES). We extensively study 12 UTES templates
and 6 obfuscation methods to build an effective automated jailbreak tool named
StructuralSleight that contains three escalating attack strategies: Structural
Attack, Structural and Character/Context Obfuscation Attack, and Fully
Obfuscated Structural Attack. Extensive experiments on existing LLMs show that
StructuralSleight significantly outperforms baseline methods. In particular,
the attack success rate reaches 94.62\% on GPT-4o, which has not been addressed
by state-of-the-art techniques.
| [
{
"created": "Thu, 13 Jun 2024 02:24:08 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jul 2024 08:23:38 GMT",
"version": "v2"
}
] | 2024-07-22 | [
[
"Li",
"Bangxin",
""
],
[
"Xing",
"Hengrui",
""
],
[
"Huang",
"Chao",
""
],
[
"Qian",
"Jin",
""
],
[
"Xiao",
"Huangqing",
""
],
[
"Feng",
"Linfeng",
""
],
[
"Tian",
"Cong",
""
]
] | Large Language Models (LLMs) are widely used in natural language processing but face the risk of jailbreak attacks that maliciously induce them to generate harmful content. Existing jailbreak attacks, including character-level and context-level attacks, mainly focus on the prompt of the plain text without specifically exploring the significant influence of its structure. In this paper, we focus on studying how prompt structure contributes to the jailbreak attack. We introduce a novel structure-level attack method based on tail structures that are rarely used during LLM training, which we refer to as Uncommon Text-Encoded Structure (UTES). We extensively study 12 UTES templates and 6 obfuscation methods to build an effective automated jailbreak tool named StructuralSleight that contains three escalating attack strategies: Structural Attack, Structural and Character/Context Obfuscation Attack, and Fully Obfuscated Structural Attack. Extensive experiments on existing LLMs show that StructuralSleight significantly outperforms baseline methods. In particular, the attack success rate reaches 94.62\% on GPT-4o, which has not been addressed by state-of-the-art techniques.
2311.10732 | Daniel Leiker | Daniel Leiker | White Paper: The Generative Education (GenEd) Framework | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The Generative Education (GenEd) Framework explores the transition from Large
Language Models (LLMs) to Large Multimodal Models (LMMs) in education,
envisioning a harmonious relationship between AI and educators to enhance
learning experiences. This paper delves into the potential of LMMs to create
personalized, interactive, and emotionally-aware learning environments. Through
addressing the Two-Sigma problem and the introduction of a conceptual product
named Harmony, the narrative emphasizes educator development, adapting policy
frameworks, and fostering cross-sector collaboration to realize the envisioned
AI-enhanced education landscape. The discussion underscores the urgency for
proactive adaptation amidst AI's evolution, offering a pragmatic roadmap to
navigate the technical, ethical, and policy intricacies of integrating AI in
education.
| [
{
"created": "Mon, 16 Oct 2023 23:30:42 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Nov 2023 16:07:26 GMT",
"version": "v2"
}
] | 2023-11-23 | [
[
"Leiker",
"Daniel",
""
]
] | The Generative Education (GenEd) Framework explores the transition from Large Language Models (LLMs) to Large Multimodal Models (LMMs) in education, envisioning a harmonious relationship between AI and educators to enhance learning experiences. This paper delves into the potential of LMMs to create personalized, interactive, and emotionally-aware learning environments. Through addressing the Two-Sigma problem and the introduction of a conceptual product named Harmony, the narrative emphasizes educator development, adapting policy frameworks, and fostering cross-sector collaboration to realize the envisioned AI-enhanced education landscape. The discussion underscores the urgency for proactive adaptation amidst AI's evolution, offering a pragmatic roadmap to navigate the technical, ethical, and policy intricacies of integrating AI in education. |
1810.01351 | Elena Guti\'errez Viedma | Pierre Ganty and Elena Guti\'errez | The Parikh Property for Weighted Context-Free Grammars | 29 pages, 2 figures, long version of FSTTCS'18 paper | null | 10.4230/LIPIcs.FSTTCS.2018 | null | cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parikh's Theorem states that every context-free grammar (CFG) is equivalent
to some regular CFG when the ordering of symbols in the words is ignored. The
same is not true for the so-called weighted CFGs, which additionally assign a
weight to each grammar rule. If the result holds for a given weighted CFG $G$,
we say that $G$ satisfies the Parikh property. We prove constructively that the
Parikh property holds for every weighted nonexpansive CFG. We also give a
decision procedure for the property when the weights are over the rationals.
| [
{
"created": "Tue, 2 Oct 2018 16:22:27 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Dec 2018 14:49:19 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Jun 2019 01:24:47 GMT",
"version": "v3"
}
] | 2019-06-20 | [
[
"Ganty",
"Pierre",
""
],
[
"Gutiérrez",
"Elena",
""
]
] | Parikh's Theorem states that every context-free grammar (CFG) is equivalent to some regular CFG when the ordering of symbols in the words is ignored. The same is not true for the so-called weighted CFGs, which additionally assign a weight to each grammar rule. If the result holds for a given weighted CFG $G$, we say that $G$ satisfies the Parikh property. We prove constructively that the Parikh property holds for every weighted nonexpansive CFG. We also give a decision procedure for the property when the weights are over the rationals. |
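The Parikh map at the center of this result records only symbol counts. The toy below computes the "weighted Parikh image" of a small, made-up weighted language — the quantity that must agree for two weighted grammars to be Parikh-equivalent. It illustrates the definition only, not the paper's decision procedure.

```python
from collections import Counter, defaultdict

def parikh(word):
    """Parikh image of a word: its multiset of symbol counts."""
    return tuple(sorted(Counter(word).items()))

# a tiny weighted language: word -> weight (values are made up)
weighted_lang = {"ab": 2, "ba": 1, "aabb": 1, "abab": 3}

image = defaultdict(int)
for w, wt in weighted_lang.items():
    image[parikh(w)] += wt          # total weight per count vector
print(dict(image))
# {(('a', 1), ('b', 1)): 3, (('a', 2), ('b', 2)): 4}
```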
1705.01823 | Michael Vanden Boom | Michael Benedikt, Pierre Bourhis, Michael Vanden Boom | Definability and Interpolation within Decidable Fixpoint Logics | null | Logical Methods in Computer Science, Volume 15, Issue 3 (September
10, 2019) lmcs:4729 | 10.23638/LMCS-15(3:29)2019 | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | We look at characterizing which formulas are expressible in rich decidable
logics such as guarded fixpoint logic, unary negation fixpoint logic, and
guarded negation fixpoint logic. We consider semantic characterizations of
definability, as well as effective characterizations. Our algorithms revolve
around a finer analysis of the tree-model property and a refinement of the
method of moving back and forth between relational logics and logics over
trees.
| [
{
"created": "Thu, 4 May 2017 12:59:24 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Jul 2018 14:28:42 GMT",
"version": "v2"
},
{
"created": "Sun, 2 Jun 2019 07:39:10 GMT",
"version": "v3"
},
{
"created": "Mon, 9 Sep 2019 15:00:17 GMT",
"version": "v4"
}
] | 2023-06-22 | [
[
"Benedikt",
"Michael",
""
],
[
"Bourhis",
"Pierre",
""
],
[
"Boom",
"Michael Vanden",
""
]
] | We look at characterizing which formulas are expressible in rich decidable logics such as guarded fixpoint logic, unary negation fixpoint logic, and guarded negation fixpoint logic. We consider semantic characterizations of definability, as well as effective characterizations. Our algorithms revolve around a finer analysis of the tree-model property and a refinement of the method of moving back and forth between relational logics and logics over trees. |