id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2311.09498 | Md Mobasshir Rashid | Md Mobasshir Rashid, Rezaur Rahman, Samiul Hasan | Network Wide Evacuation Traffic Prediction in a Rapidly Intensifying
Hurricane from Traffic Detectors and Facebook Movement Data: A Deep Learning
Approach | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic prediction during hurricane evacuation is essential for optimizing
the use of transportation infrastructures. It can reduce evacuation time by
providing information on future congestion in advance. However, evacuation
traffic prediction can be challenging as evacuation traffic patterns are
significantly different from regular period traffic. A data-driven traffic
prediction model is developed in this study by utilizing traffic detector and
Facebook movement data during Hurricane Ian, a rapidly intensifying hurricane.
We select 766 traffic detectors from Florida's 4 major interstates to collect
traffic features. Additionally, we use Facebook movement data collected during
Hurricane Ian's evacuation period. The deep-learning model is first trained on
regular period (May-August 2022) data to understand regular traffic patterns
and then Hurricane Ian's evacuation period data is used as test data. The model
achieves 95% accuracy (RMSE = 356) during the regular period, but it underperforms
with 55% accuracy (RMSE = 1084) during the evacuation period. Then, a transfer
learning approach is adopted where a pretrained model is used with additional
evacuation related features to predict evacuation period traffic. After
transfer learning, the model achieves 89% accuracy (RMSE = 514). Adding
Facebook movement data further reduces the model's RMSE value to 393 and increases
accuracy to 93%. The proposed model can forecast traffic up to
6 hours in advance. Evacuation traffic management officials can use the
developed traffic prediction model to anticipate future traffic congestion in
advance and take proactive measures to reduce delays during evacuation.
| [
{
"created": "Thu, 16 Nov 2023 01:50:54 GMT",
"version": "v1"
}
] | 2023-11-17 | [
[
"Rashid",
"Md Mobasshir",
""
],
[
"Rahman",
"Rezaur",
""
],
[
"Hasan",
"Samiul",
""
]
] | Traffic prediction during hurricane evacuation is essential for optimizing the use of transportation infrastructures. It can reduce evacuation time by providing information on future congestion in advance. However, evacuation traffic prediction can be challenging as evacuation traffic patterns are significantly different from regular period traffic. A data-driven traffic prediction model is developed in this study by utilizing traffic detector and Facebook movement data during Hurricane Ian, a rapidly intensifying hurricane. We select 766 traffic detectors from Florida's 4 major interstates to collect traffic features. Additionally, we use Facebook movement data collected during Hurricane Ian's evacuation period. The deep-learning model is first trained on regular period (May-August 2022) data to understand regular traffic patterns and then Hurricane Ian's evacuation period data is used as test data. The model achieves 95% accuracy (RMSE = 356) during the regular period, but it underperforms with 55% accuracy (RMSE = 1084) during the evacuation period. Then, a transfer learning approach is adopted where a pretrained model is used with additional evacuation related features to predict evacuation period traffic. After transfer learning, the model achieves 89% accuracy (RMSE = 514). Adding Facebook movement data further reduces the model's RMSE value to 393 and increases accuracy to 93%. The proposed model can forecast traffic up to 6 hours in advance. Evacuation traffic management officials can use the developed traffic prediction model to anticipate future traffic congestion in advance and take proactive measures to reduce delays during evacuation. |
2106.14617 | Lucas Cavalcanti | Lucas Cavalcanti, Riei Joaquim, Edna Barros | Optimized Wireless Control and Telemetry Network for Mobile Soccer
Robots | 12 pages, published in RoboCup Symposium 2021 | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In a diverse set of robotics applications, including RoboCup categories,
mobile robots require control commands to interact correctly with the surrounding
environment. These control commands should arrive wirelessly so as not to interfere
with the robots' movement; the communication also has a set of requirements, including
low latency and consistent delivery. This paper presents a complete
communication architecture consisting of computer communication with a base
station, which transmits the data to the robots and returns their telemetry to the
computer. With the proposed communication, it is possible to send messages in
less than 4.5 ms for six robots with telemetry enabled in all of them.
| [
{
"created": "Mon, 28 Jun 2021 12:24:17 GMT",
"version": "v1"
}
] | 2021-06-29 | [
[
"Cavalcanti",
"Lucas",
""
],
[
"Joaquim",
"Riei",
""
],
[
"Barros",
"Edna",
""
]
] | In a diverse set of robotics applications, including RoboCup categories, mobile robots require control commands to interact correctly with the surrounding environment. These control commands should arrive wirelessly so as not to interfere with the robots' movement; the communication also has a set of requirements, including low latency and consistent delivery. This paper presents a complete communication architecture consisting of computer communication with a base station, which transmits the data to the robots and returns their telemetry to the computer. With the proposed communication, it is possible to send messages in less than 4.5 ms for six robots with telemetry enabled in all of them. |
1304.2728 | Silvio Ursic | Silvio Ursic | Coefficients of Relations for Probabilistic Reasoning | Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987) | null | null | UAI-P-1987-PG-156-162 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Definitions and notations with historical references are given for some
numerical coefficients commonly used to quantify relations among collections of
objects for the purpose of expressing approximate knowledge and probabilistic
reasoning.
| [
{
"created": "Wed, 27 Mar 2013 19:47:41 GMT",
"version": "v1"
}
] | 2013-04-11 | [
[
"Ursic",
"Silvio",
""
]
] | Definitions and notations with historical references are given for some numerical coefficients commonly used to quantify relations among collections of objects for the purpose of expressing approximate knowledge and probabilistic reasoning. |
2306.04557 | Jens Behley | Jan Weyler and Federico Magistri and Elias Marks and Yue Linn Chong
and Matteo Sodano and Gianmarco Roggiolani and Nived Chebrolu and Cyrill
Stachniss and Jens Behley | PhenoBench -- A Large Dataset and Benchmarks for Semantic Image
Interpretation in the Agricultural Domain | Accepted by IEEE Transactions on Pattern Analysis and Machine
Intelligence (T-PAMI) | null | 10.1109/TPAMI.2024.3419548 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The production of food, feed, fiber, and fuel is a key task of agriculture,
which has to cope with many challenges in the upcoming decades, e.g., a higher
demand, climate change, lack of workers, and the availability of arable land.
Vision systems can support making better and more sustainable field management
decisions, but also support the breeding of new crop varieties by allowing
temporally dense and reproducible measurements. Recently, agricultural robotics
has attracted increasing interest in the vision and robotics communities since it is a
promising avenue for coping with the aforementioned lack of workers and
enabling more sustainable production. While large datasets and benchmarks in
other domains are readily available and enable significant progress,
agricultural datasets and benchmarks are comparably rare. We present an
annotated dataset and benchmarks for the semantic interpretation of real
agricultural fields. Our dataset, recorded with a UAV, provides high-quality,
pixel-wise annotations of crops and weeds, but also crop leaf instances at the
same time. Furthermore, we provide benchmarks for various tasks on a hidden
test set comprised of different fields: known fields covered by the training
data and a completely unseen field. Our dataset, benchmarks, and code are
available at \url{https://www.phenobench.org}.
| [
{
"created": "Wed, 7 Jun 2023 16:04:08 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jul 2024 12:48:47 GMT",
"version": "v2"
}
] | 2024-07-25 | [
[
"Weyler",
"Jan",
""
],
[
"Magistri",
"Federico",
""
],
[
"Marks",
"Elias",
""
],
[
"Chong",
"Yue Linn",
""
],
[
"Sodano",
"Matteo",
""
],
[
"Roggiolani",
"Gianmarco",
""
],
[
"Chebrolu",
"Nived",
""
],
[
"Stachniss",
"Cyrill",
""
],
[
"Behley",
"Jens",
""
]
] | The production of food, feed, fiber, and fuel is a key task of agriculture, which has to cope with many challenges in the upcoming decades, e.g., a higher demand, climate change, lack of workers, and the availability of arable land. Vision systems can support making better and more sustainable field management decisions, but also support the breeding of new crop varieties by allowing temporally dense and reproducible measurements. Recently, agricultural robotics has attracted increasing interest in the vision and robotics communities since it is a promising avenue for coping with the aforementioned lack of workers and enabling more sustainable production. While large datasets and benchmarks in other domains are readily available and enable significant progress, agricultural datasets and benchmarks are comparably rare. We present an annotated dataset and benchmarks for the semantic interpretation of real agricultural fields. Our dataset, recorded with a UAV, provides high-quality, pixel-wise annotations of crops and weeds, but also crop leaf instances at the same time. Furthermore, we provide benchmarks for various tasks on a hidden test set comprised of different fields: known fields covered by the training data and a completely unseen field. Our dataset, benchmarks, and code are available at \url{https://www.phenobench.org}. |
1502.03068 | Sean Weerakkody | Sean Weerakkody, Yilin Mo, Bruno Sinopoli, Duo Han, Ling Shi | Multi-Sensor Scheduling for State Estimation with Event-Based,
Stochastic Triggers | 8 pages, 2 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In networked systems, state estimation is hampered by communication limits.
Past approaches, which consider scheduling sensors through deterministic
event-triggers, reduce communication and maintain estimation quality. However,
these approaches destroy the Gaussian property of the state, making it
computationally intractable to obtain an exact minimum mean squared error
estimate. We propose a stochastic event-triggered sensor schedule for state
estimation which preserves the Gaussianity of the system, extending previous
results from the single-sensor to the multi-sensor case.
| [
{
"created": "Tue, 10 Feb 2015 20:21:48 GMT",
"version": "v1"
}
] | 2015-02-11 | [
[
"Weerakkody",
"Sean",
""
],
[
"Mo",
"Yilin",
""
],
[
"Sinopoli",
"Bruno",
""
],
[
"Han",
"Duo",
""
],
[
"Shi",
"Ling",
""
]
] | In networked systems, state estimation is hampered by communication limits. Past approaches, which consider scheduling sensors through deterministic event-triggers, reduce communication and maintain estimation quality. However, these approaches destroy the Gaussian property of the state, making it computationally intractable to obtain an exact minimum mean squared error estimate. We propose a stochastic event-triggered sensor schedule for state estimation which preserves the Gaussianity of the system, extending previous results from the single-sensor to the multi-sensor case. |
2401.02843 | Julia Fabienne Sandk\"uhler | Katja Grace, Harlan Stewart, Julia Fabienne Sandk\"uhler, Stephen
Thomas, Ben Weinstein-Raun, Jan Brauner | Thousands of AI Authors on the Future of AI | The asterisk indicates the corresponding author. The dagger indicates
equal contribution | null | null | null | cs.CY cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the largest survey of its kind, 2,778 researchers who had published in
top-tier artificial intelligence (AI) venues gave predictions on the pace of AI
progress and the nature and impacts of advanced AI systems. The aggregate
forecasts give at least a 50% chance of AI systems achieving several milestones
by 2028, including autonomously constructing a payment processing site from
scratch, creating a song indistinguishable from a new song by a popular
musician, and autonomously downloading and fine-tuning a large language model.
If science continues undisrupted, the chance of unaided machines outperforming
humans in every possible task was estimated at 10% by 2027, and 50% by 2047.
The latter estimate is 13 years earlier than that reached in a similar survey
we conducted only one year earlier [Grace et al., 2022]. However, the chance of
all human occupations becoming fully automatable was forecast to reach 10% by
2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).
Most respondents expressed substantial uncertainty about the long-term value
of AI progress: While 68.3% thought good outcomes from superhuman AI are more
likely than bad, of these net optimists 48% gave at least a 5% chance of
extremely bad outcomes such as human extinction, and 59% of net pessimists gave
5% or more to extremely good outcomes. Between 38% and 51% of respondents gave
at least a 10% chance to advanced AI leading to outcomes as bad as human
extinction. More than half suggested that "substantial" or "extreme" concern is
warranted about six different AI-related scenarios, including misinformation,
authoritarian control, and inequality. There was disagreement about whether
faster or slower AI progress would be better for the future of humanity.
However, there was broad agreement that research aimed at minimizing potential
risks from AI systems ought to be prioritized more.
| [
{
"created": "Fri, 5 Jan 2024 14:53:09 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Apr 2024 18:15:42 GMT",
"version": "v2"
}
] | 2024-05-02 | [
[
"Grace",
"Katja",
""
],
[
"Stewart",
"Harlan",
""
],
[
"Sandkühler",
"Julia Fabienne",
""
],
[
"Thomas",
"Stephen",
""
],
[
"Weinstein-Raun",
"Ben",
""
],
[
"Brauner",
"Jan",
""
]
] | In the largest survey of its kind, 2,778 researchers who had published in top-tier artificial intelligence (AI) venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more. |
2302.03976 | Matthew Johnson | Matthew A. Johnson and Stavros Volos and Ken Gordon and Sean T. Allen
and Christoph M. Wintersteiger and Sylvan Clebsch and John Starks and Manuel
Costa | Parma: Confidential Containers via Attested Execution Policies | 12 pages, 6 figures, 2 tables | null | null | null | cs.CR cs.NI cs.OS | http://creativecommons.org/licenses/by/4.0/ | Container-based technologies empower cloud tenants to develop highly portable
software and deploy services in the cloud at a rapid pace. Cloud privacy,
meanwhile, is important as a large number of container deployments operate on
privacy-sensitive data, but challenging due to the increasing frequency and
sophistication of attacks. State-of-the-art confidential container-based
designs leverage process-based trusted execution environments (TEEs), but face
security and compatibility issues that limit their practical deployment. We
propose Parma, an architecture that provides lift-and-shift deployment of
unmodified containers while providing strong security protection against a
powerful attacker who controls the untrusted host and hypervisor. Parma
leverages VM-level isolation to execute a container group within a unique
VM-based TEE. Besides container integrity and user data confidentiality and
integrity, Parma also offers container attestation and execution integrity
based on an attested execution policy. Parma execution policies provide an
inductive proof over all future states of the container group. This proof,
which is established during initialization, forms a root of trust that can be
used for secure operations within the container group without requiring any
modifications of the containerized workflow itself (aside from the inclusion of
the execution policy). We evaluate Parma on AMD SEV-SNP processors by running a
diverse set of workloads demonstrating that workflows exhibit 0-26% additional
overhead in performance over running outside the enclave, with a mean 13%
overhead on SPEC2017, while requiring no modifications to their program code.
Adding execution policies introduces less than 1% additional overhead.
Furthermore, we have deployed Parma as the underlying technology driving
Confidential Containers on Azure Container Instances.
| [
{
"created": "Wed, 8 Feb 2023 10:15:07 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Feb 2023 10:04:07 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Mar 2023 16:16:33 GMT",
"version": "v3"
}
] | 2023-03-08 | [
[
"Johnson",
"Matthew A.",
""
],
[
"Volos",
"Stavros",
""
],
[
"Gordon",
"Ken",
""
],
[
"Allen",
"Sean T.",
""
],
[
"Wintersteiger",
"Christoph M.",
""
],
[
"Clebsch",
"Sylvan",
""
],
[
"Starks",
"John",
""
],
[
"Costa",
"Manuel",
""
]
] | Container-based technologies empower cloud tenants to develop highly portable software and deploy services in the cloud at a rapid pace. Cloud privacy, meanwhile, is important as a large number of container deployments operate on privacy-sensitive data, but challenging due to the increasing frequency and sophistication of attacks. State-of-the-art confidential container-based designs leverage process-based trusted execution environments (TEEs), but face security and compatibility issues that limit their practical deployment. We propose Parma, an architecture that provides lift-and-shift deployment of unmodified containers while providing strong security protection against a powerful attacker who controls the untrusted host and hypervisor. Parma leverages VM-level isolation to execute a container group within a unique VM-based TEE. Besides container integrity and user data confidentiality and integrity, Parma also offers container attestation and execution integrity based on an attested execution policy. Parma execution policies provide an inductive proof over all future states of the container group. This proof, which is established during initialization, forms a root of trust that can be used for secure operations within the container group without requiring any modifications of the containerized workflow itself (aside from the inclusion of the execution policy). We evaluate Parma on AMD SEV-SNP processors by running a diverse set of workloads demonstrating that workflows exhibit 0-26% additional overhead in performance over running outside the enclave, with a mean 13% overhead on SPEC2017, while requiring no modifications to their program code. Adding execution policies introduces less than 1% additional overhead. Furthermore, we have deployed Parma as the underlying technology driving Confidential Containers on Azure Container Instances. |
2102.00905 | Benno van den Berg | Benno van den Berg and Martijn den Besten | Quadratic type checking for objective type theory | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a modification of standard Martin-Lof type theory in which we
eliminate definitional equality and replace all computation rules by
propositional equalities. We show that type checking for such a system can be
done in quadratic time and that it has a natural homotopy-theoretic semantics.
| [
{
"created": "Mon, 1 Feb 2021 15:28:35 GMT",
"version": "v1"
}
] | 2021-02-02 | [
[
"Berg",
"Benno van den",
""
],
[
"Besten",
"Martijn den",
""
]
] | We introduce a modification of standard Martin-Löf type theory in which we eliminate definitional equality and replace all computation rules by propositional equalities. We show that type checking for such a system can be done in quadratic time and that it has a natural homotopy-theoretic semantics. |
2111.11710 | Sebastian Me\v{z}nar | Sebastian Me\v{z}nar, Matej Bevec, Nada Lavra\v{c}, Bla\v{z} \v{S}krlj | Link Analysis meets Ontologies: Are Embeddings the Answer? | 17 pages, 8 tables, 7 figures | null | null | null | cs.LG cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing amounts of semantic resources offer valuable storage of human
knowledge; however, the probability of wrong entries increases with the
increased size. The development of approaches that identify potentially
spurious parts of a given knowledge base is thus becoming an increasingly
important area of interest. In this work, we present a systematic evaluation of
whether structure-only link analysis methods can already offer a scalable means
of detecting possible anomalies, as well as potentially interesting novel
relation candidates. Evaluating thirteen methods on eight different semantic
resources, including Gene Ontology, Food Ontology, Marine Ontology and similar,
we demonstrated that structure-only link analysis could offer scalable anomaly
detection for a subset of the data sets. Further, we demonstrated that by
considering symbolic node embedding, explanations of the predictions (links)
could be obtained, making this branch of methods potentially more valuable than
the black-box only ones. To our knowledge, this is currently one of the most
extensive systematic studies of the applicability of different types of link
analysis methods across semantic resources from different domains.
| [
{
"created": "Tue, 23 Nov 2021 08:05:43 GMT",
"version": "v1"
}
] | 2021-11-24 | [
[
"Mežnar",
"Sebastian",
""
],
[
"Bevec",
"Matej",
""
],
[
"Lavrač",
"Nada",
""
],
[
"Škrlj",
"Blaž",
""
]
] | The increasing amounts of semantic resources offer valuable storage of human knowledge; however, the probability of wrong entries increases with the increased size. The development of approaches that identify potentially spurious parts of a given knowledge base is thus becoming an increasingly important area of interest. In this work, we present a systematic evaluation of whether structure-only link analysis methods can already offer a scalable means of detecting possible anomalies, as well as potentially interesting novel relation candidates. Evaluating thirteen methods on eight different semantic resources, including Gene Ontology, Food Ontology, Marine Ontology and similar, we demonstrated that structure-only link analysis could offer scalable anomaly detection for a subset of the data sets. Further, we demonstrated that by considering symbolic node embedding, explanations of the predictions (links) could be obtained, making this branch of methods potentially more valuable than the black-box only ones. To our knowledge, this is currently one of the most extensive systematic studies of the applicability of different types of link analysis methods across semantic resources from different domains. |
2107.12211 | Yann Fraboni | Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi | A General Theory for Client Sampling in Federated Learning | null | null | null | null | cs.LG cs.AI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While client sampling is a central operation of current state-of-the-art
federated learning (FL) approaches, the impact of this procedure on the
convergence and speed of FL remains under-investigated. In this work, we
provide a general theoretical framework to quantify the impact of a client
sampling scheme and of the clients' heterogeneity on federated optimization.
First, we provide a unified theoretical ground for previously reported
experimental results on sampling schemes, concerning the relationship between FL convergence and the
variance of the aggregation weights. Second, we prove for the first time that
the quality of FL convergence is also impacted by the resulting covariance
between aggregation weights. Our theory is general, and is here applied to
Multinomial Distribution (MD) and Uniform sampling, two default unbiased client
sampling schemes of FL, and demonstrated through a series of experiments in
non-iid and unbalanced scenarios. Our results suggest that MD sampling should
be used as the default sampling scheme, due to its resilience to changes in
data ratio during the learning process, while Uniform sampling is superior only
in the special case when clients have the same amount of data.
| [
{
"created": "Mon, 26 Jul 2021 13:36:06 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Oct 2021 12:18:12 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Dec 2021 09:02:13 GMT",
"version": "v3"
},
{
"created": "Tue, 14 Jun 2022 18:59:45 GMT",
"version": "v4"
}
] | 2022-06-16 | [
[
"Fraboni",
"Yann",
""
],
[
"Vidal",
"Richard",
""
],
[
"Kameni",
"Laetitia",
""
],
[
"Lorenzi",
"Marco",
""
]
] | While client sampling is a central operation of current state-of-the-art federated learning (FL) approaches, the impact of this procedure on the convergence and speed of FL remains under-investigated. In this work, we provide a general theoretical framework to quantify the impact of a client sampling scheme and of the clients' heterogeneity on federated optimization. First, we provide a unified theoretical ground for previously reported experimental results on sampling schemes, concerning the relationship between FL convergence and the variance of the aggregation weights. Second, we prove for the first time that the quality of FL convergence is also impacted by the resulting covariance between aggregation weights. Our theory is general, and is here applied to Multinomial Distribution (MD) and Uniform sampling, two default unbiased client sampling schemes of FL, and demonstrated through a series of experiments in non-iid and unbalanced scenarios. Our results suggest that MD sampling should be used as the default sampling scheme, due to its resilience to changes in data ratio during the learning process, while Uniform sampling is superior only in the special case when clients have the same amount of data. |
2303.11708 | Gabriel Skantze | Gabriel Skantze, A. Seza Do\u{g}ru\"oz | The Open-domain Paradox for Chatbots: Common Ground as the Basis for
Human-like Dialogue | Accepted at SIGDIAL 2023 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a surge in interest in the development of open-domain chatbots,
driven by the recent advancements of large language models. The "openness" of
the dialogue is expected to be maximized by providing minimal information to
the users about the common ground they can expect, including the presumed joint
activity. However, evidence suggests that the effect is the opposite. Asking
users to "just chat about anything" results in a very narrow form of dialogue,
which we refer to as the "open-domain paradox". In this position paper, we
explain this paradox through the theory of common ground as the basis for
human-like communication. Furthermore, we question the assumptions behind
open-domain chatbots and identify paths forward for enabling common ground in
human-computer dialogue.
| [
{
"created": "Tue, 21 Mar 2023 10:01:49 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Jul 2023 09:22:02 GMT",
"version": "v2"
}
] | 2023-07-31 | [
[
"Skantze",
"Gabriel",
""
],
[
"Doğruöz",
"A. Seza",
""
]
] | There is a surge in interest in the development of open-domain chatbots, driven by the recent advancements of large language models. The "openness" of the dialogue is expected to be maximized by providing minimal information to the users about the common ground they can expect, including the presumed joint activity. However, evidence suggests that the effect is the opposite. Asking users to "just chat about anything" results in a very narrow form of dialogue, which we refer to as the "open-domain paradox". In this position paper, we explain this paradox through the theory of common ground as the basis for human-like communication. Furthermore, we question the assumptions behind open-domain chatbots and identify paths forward for enabling common ground in human-computer dialogue. |
1301.1275 | Luca Allodi | Luca Allodi, Fabio Massacci | My Software has a Vulnerability, should I worry? | 12 pages, 4 figures | ACM TISSEC Vol 17 Issue 1, 2014 | 10.1145/2630069 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | (U.S.) Rule-based policies to mitigate software risk suggest using the CVSS
score to measure the individual vulnerability risk and act accordingly: a HIGH
CVSS score according to the NVD (National (U.S.) Vulnerability Database) is
therefore translated into a "Yes". A key issue is whether such a rule is
economically sensible, in particular if reported vulnerabilities have been
actually exploited in the wild, and whether the risk score actually matches
the risk of actual exploitation.
We compare the NVD dataset with two additional datasets, the EDB for the
white market of vulnerabilities (such as those present in Metasploit), and the
EKITS for the exploits traded in the black market. We benchmark them against
Symantec's threat explorer dataset (SYM) of actual exploits in the wild. We
analyze the whole spectrum of CVSS submetrics and use these characteristics to
perform a case-controlled analysis of CVSS scores (similar to those used to
link lung cancer and smoking) to test its reliability as a risk factor for
actual exploitation.
We conclude that (a) fixing just because of a high CVSS score in NVD yields only
negligible risk reduction, (b) the additional existence of proof-of-concept
exploits (e.g. in EDB) may yield some additional but not large risk reduction,
(c) fixing in response to presence in black markets yields the equivalent risk
reduction of wearing safety belt in cars (you might also die but still..). On
the negative side, our study shows that as industry we miss a metric with high
specificity (ruling out vulns for which we shouldn't worry).
In order to address the feedback from BlackHat 2013's audience, the final
revision (V3) provides additional data in Appendix A detailing how the control
variables in the study affect the results.
| [
{
"created": "Mon, 7 Jan 2013 17:32:36 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Aug 2013 21:09:41 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Sep 2013 14:05:27 GMT",
"version": "v3"
}
] | 2015-04-13 | [
[
"Allodi",
"Luca",
""
],
[
"Massacci",
"Fabio",
""
]
] | (U.S.) Rule-based policies to mitigate software risk suggest using the CVSS score to measure the individual vulnerability risk and acting accordingly: a HIGH CVSS score according to the NVD (National (U.S.) Vulnerability Database) is therefore translated into a "Yes". A key issue is whether such a rule is economically sensible, in particular whether reported vulnerabilities have actually been exploited in the wild, and whether the risk score actually matches the risk of actual exploitation. We compare the NVD dataset with two additional datasets, the EDB for the white market of vulnerabilities (such as those present in Metasploit), and the EKITS for the exploits traded in the black market. We benchmark them against Symantec's threat explorer dataset (SYM) of actual exploits in the wild. We analyze the whole spectrum of CVSS submetrics and use these characteristics to perform a case-controlled analysis of CVSS scores (similar to those used to link lung cancer and smoking) to test their reliability as a risk factor for actual exploitation. We conclude that (a) fixing just because of a high CVSS score in NVD yields only negligible risk reduction, (b) the additional existence of proof-of-concept exploits (e.g. in EDB) may yield some additional but not large risk reduction, and (c) fixing in response to presence in black markets yields a risk reduction equivalent to that of wearing a seat belt in cars (you might still die, but it helps). On the negative side, our study shows that as an industry we lack a metric with high specificity (ruling out vulnerabilities about which we shouldn't worry). In order to address the feedback from BlackHat 2013's audience, the final revision (V3) provides additional data in Appendix A detailing how the control variables in the study affect the results. |
2012.03743 | Marcos Baez | Alessandro Pina, Marcos Baez, Florian Daniel | Bringing Cognitive Augmentation to Web Browsing Accessibility | null | null | null | null | cs.CY cs.AI cs.HC cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we explore the opportunities brought by cognitive augmentation
to provide a more natural and accessible web browsing experience. We explore
these opportunities through \textit{conversational web browsing}, an emerging
interaction paradigm for the Web that enables blind and visually impaired users
(BVIP), as well as regular users, to access the contents and features of
websites through conversational agents. Informed by the literature, our
previous work and prototyping exercises, we derive a conceptual framework for
supporting BVIP conversational web browsing needs, to then focus on the
challenges of automatically providing this support, describing our early work
and prototype that leverage heuristics that consider structural and content
features only.
| [
{
"created": "Mon, 7 Dec 2020 14:40:52 GMT",
"version": "v1"
}
] | 2020-12-08 | [
[
"Pina",
"Alessandro",
""
],
[
"Baez",
"Marcos",
""
],
[
"Daniel",
"Florian",
""
]
] | In this paper we explore the opportunities brought by cognitive augmentation to provide a more natural and accessible web browsing experience. We explore these opportunities through \textit{conversational web browsing}, an emerging interaction paradigm for the Web that enables blind and visually impaired users (BVIP), as well as regular users, to access the contents and features of websites through conversational agents. Informed by the literature, our previous work and prototyping exercises, we derive a conceptual framework for supporting BVIP conversational web browsing needs, to then focus on the challenges of automatically providing this support, describing our early work and prototype that leverage heuristics that consider structural and content features only. |
2010.12948 | Mengjin Dong | Mengjin Dong, Long Xie, Sandhitsu R. Das, Jiancong Wang, Laura E.M.
Wisse, Robin deFlores, David A. Wolk, Paul Yushkevich (for the Alzheimer's
Disease Neuroimaging Initiative) | DeepAtrophy: Teaching a Neural Network to Differentiate Progressive
Changes from Noise on Longitudinal MRI in Alzheimer's Disease | Submitted to a journal, IF ~ 6 | null | null | null | cs.LG eess.IV q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Volume change measures derived from longitudinal MRI (e.g. hippocampal
atrophy) are a well-studied biomarker of disease progression in Alzheimer's
Disease (AD) and are used in clinical trials to track the therapeutic efficacy
of disease-modifying treatments. However, longitudinal MRI change measures can
be confounded by non-biological factors, such as different degrees of head
motion and susceptibility artifact between pairs of MRI scans. We hypothesize
that deep learning methods applied directly to pairs of longitudinal MRI scans
can be trained to differentiate between biological changes and non-biological
factors better than conventional approaches based on deformable image
registration. To achieve this, we make a simplifying assumption that biological
factors are associated with time (i.e. the hippocampus shrinks overtime in the
aging population) whereas non-biological factors are independent of time. We
then formulate deep learning networks to infer the temporal order of
same-subject MRI scans input to the network in arbitrary order; as well as to
infer ratios between interscan intervals for two pairs of same-subject MRI
scans. In the test dataset, these networks perform better in tasks of temporal
ordering (89.3%) and interscan interval inference (86.1%) than a
state-of-the-art deformation-based morphometry method ALOHA (76.6% and 76.1%
respectively) (Das et al., 2012). Furthermore, we derive a disease progression
score from the network that is able to detect a group difference between 58
preclinical AD and 75 beta-amyloid-negative cognitively normal individuals
within one year, compared to two years for ALOHA. This suggests that deep
learning can be trained to differentiate MRI changes due to biological factors
(tissue loss) from changes due to non-biological factors, leading to novel
biomarkers that are more sensitive to longitudinal changes at the earliest
stages of AD.
| [
{
"created": "Sat, 24 Oct 2020 18:23:02 GMT",
"version": "v1"
}
] | 2020-10-27 | [
[
"Dong",
"Mengjin",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"Xie",
"Long",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"Das",
"Sandhitsu R.",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"Wang",
"Jiancong",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"Wisse",
"Laura E. M.",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"deFlores",
"Robin",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"Wolk",
"David A.",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
],
[
"Yushkevich",
"Paul",
"",
"for the Alzheimer's\n Disease Neuroimaging Initiative"
]
] | Volume change measures derived from longitudinal MRI (e.g. hippocampal atrophy) are a well-studied biomarker of disease progression in Alzheimer's Disease (AD) and are used in clinical trials to track the therapeutic efficacy of disease-modifying treatments. However, longitudinal MRI change measures can be confounded by non-biological factors, such as different degrees of head motion and susceptibility artifact between pairs of MRI scans. We hypothesize that deep learning methods applied directly to pairs of longitudinal MRI scans can be trained to differentiate between biological changes and non-biological factors better than conventional approaches based on deformable image registration. To achieve this, we make a simplifying assumption that biological factors are associated with time (i.e. the hippocampus shrinks over time in the aging population) whereas non-biological factors are independent of time. We then formulate deep learning networks to infer the temporal order of same-subject MRI scans input to the network in arbitrary order, as well as to infer ratios between interscan intervals for two pairs of same-subject MRI scans. In the test dataset, these networks perform better in tasks of temporal ordering (89.3%) and interscan interval inference (86.1%) than a state-of-the-art deformation-based morphometry method ALOHA (76.6% and 76.1% respectively) (Das et al., 2012). Furthermore, we derive a disease progression score from the network that is able to detect a group difference between 58 preclinical AD and 75 beta-amyloid-negative cognitively normal individuals within one year, compared to two years for ALOHA. This suggests that deep learning can be trained to differentiate MRI changes due to biological factors (tissue loss) from changes due to non-biological factors, leading to novel biomarkers that are more sensitive to longitudinal changes at the earliest stages of AD. |
2208.06828 | Li-Yue Sun | John Chiang | Multinomial Logistic Regression Algorithms via Quadratic Gradient | There is a good chance that the enhanced gradient methods for
multiclass LR could be used in the classisation neural-network training via
the softmax activation and the cross-entropy loss | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multinomial logistic regression, also known by other names such as multiclass
logistic regression and softmax regression, is a fundamental classification
method that generalizes binary logistic regression to multiclass problems. A
recently work proposed a faster gradient called $\texttt{quadratic gradient}$
that can accelerate the binary logistic regression training, and presented an
enhanced Nesterov's accelerated gradient (NAG) method for binary logistic
regression.
In this paper, we extend this work to multiclass logistic regression and
propose an enhanced Adaptive Gradient Algorithm (Adagrad) that can accelerate
the original Adagrad method. We test the enhanced NAG method and the enhanced
Adagrad method on some multiclass-problem datasets. Experimental results show
that both enhanced methods converge faster than their original ones
respectively.
| [
{
"created": "Sun, 14 Aug 2022 11:00:27 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Mar 2023 12:10:09 GMT",
"version": "v2"
}
] | 2023-03-30 | [
[
"Chiang",
"John",
""
]
] | Multinomial logistic regression, also known by other names such as multiclass logistic regression and softmax regression, is a fundamental classification method that generalizes binary logistic regression to multiclass problems. A recent work proposed a faster gradient called $\texttt{quadratic gradient}$ that can accelerate binary logistic regression training, and presented an enhanced Nesterov's accelerated gradient (NAG) method for binary logistic regression. In this paper, we extend this work to multiclass logistic regression and propose an enhanced Adaptive Gradient Algorithm (Adagrad) that can accelerate the original Adagrad method. We test the enhanced NAG method and the enhanced Adagrad method on some multiclass-problem datasets. Experimental results show that both enhanced methods converge faster than their respective original counterparts. |
1507.00385 | Ranjit Jhala | Niki Vazou, Alexander Bakst, Ranjit Jhala | Bounded Refinement Types | 14 pages, International Conference on Functional Programming, ICFP
2015 | null | null | null | cs.PL cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a notion of bounded quantification for refinement types and show
how it expands the expressiveness of refinement typing by using it to develop
typed combinators for: (1) relational algebra and safe database access, (2)
Floyd-Hoare logic within a state transformer monad equipped with combinators
for branching and looping, and (3) using the above to implement a refined IO
monad that tracks capabilities and resource usage. This leap in expressiveness
comes via a translation to "ghost" functions, which lets us retain the
automated and decidable SMT based checking and inference that makes refinement
typing effective in practice.
| [
{
"created": "Wed, 1 Jul 2015 22:14:45 GMT",
"version": "v1"
}
] | 2015-07-03 | [
[
"Vazou",
"Niki",
""
],
[
"Bakst",
"Alexander",
""
],
[
"Jhala",
"Ranjit",
""
]
] | We present a notion of bounded quantification for refinement types and show how it expands the expressiveness of refinement typing by using it to develop typed combinators for: (1) relational algebra and safe database access, (2) Floyd-Hoare logic within a state transformer monad equipped with combinators for branching and looping, and (3) using the above to implement a refined IO monad that tracks capabilities and resource usage. This leap in expressiveness comes via a translation to "ghost" functions, which lets us retain the automated and decidable SMT based checking and inference that makes refinement typing effective in practice. |
1911.10737 | Liangchen Liu | Liangchen Liu, Louis Ly, Colin Macdonald, and Yen-Hsi Richard Tsai | Nearest Neighbor Sampling of Point Sets using Rays | 48 pages, 14 figures, accepted to Communication on Applied
Mathematics and Computation (CAMC), Focused Issue in Honor of Prof. Stanley
Osher on the Occasion of His 80th Birthday. Fixed typos and improved
notations | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new framework for the sampling, compression, and analysis of
distributions of point sets and other geometric objects embedded in Euclidean
spaces. Our approach involves constructing a tensor called the RaySense sketch,
which captures nearest neighbors from the underlying geometry of points along a
set of rays. We explore various operations that can be performed on the
RaySense sketch, leading to different properties and potential applications.
Statistical information about the data set can be extracted from the sketch,
independent of the ray set. Line integrals on point sets can be efficiently
computed using the sketch. We also present several examples illustrating
applications of the proposed strategy in practical scenarios.
| [
{
"created": "Mon, 25 Nov 2019 07:31:54 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Nov 2019 20:09:42 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Nov 2022 01:24:32 GMT",
"version": "v3"
},
{
"created": "Mon, 29 May 2023 23:16:24 GMT",
"version": "v4"
},
{
"created": "Wed, 13 Sep 2023 04:03:20 GMT",
"version": "v5"
}
] | 2023-09-14 | [
[
"Liu",
"Liangchen",
""
],
[
"Ly",
"Louis",
""
],
[
"Macdonald",
"Colin",
""
],
[
"Tsai",
"Yen-Hsi Richard",
""
]
] | We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios. |
2404.02383 | Andrew Eckford | Dongliang Jing, Lin Lin, Andrew W. Eckford | Performance Analysis and ISI Mitigation with Imperfect Transmitter in
Molecular Communication | Accepted for publication in IEEE Transactions on Nanobioscience | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In molecular communication (MC), molecules are released from the transmitter
to convey information. This paper considers a realistic molecule shift keying
(MoSK) scenario with two species of molecule in two reservoirs, where the
molecules are harvested from the environment and placed into different
reservoirs, which are purified by exchanging molecules between the reservoirs.
This process consumes energy, and for a reasonable energy cost, the reservoirs
cannot be pure; thus, our MoSK transmitter is imperfect, releasing mixtures of
both molecules for every symbol, resulting in inter-symbol interference (ISI).
To mitigate ISI, the properties of the receiver are analyzed and a detection
method based on the ratio of different molecules is proposed. Theoretical and
simulation results are provided, showing that with the increase of energy cost,
the system achieves better performance. The good performance of the proposed
detection scheme is also demonstrated.
| [
{
"created": "Wed, 3 Apr 2024 00:55:12 GMT",
"version": "v1"
}
] | 2024-04-04 | [
[
"Jing",
"Dongliang",
""
],
[
"Lin",
"Lin",
""
],
[
"Eckford",
"Andrew W.",
""
]
] | In molecular communication (MC), molecules are released from the transmitter to convey information. This paper considers a realistic molecule shift keying (MoSK) scenario with two species of molecule in two reservoirs, where the molecules are harvested from the environment and placed into different reservoirs, which are purified by exchanging molecules between the reservoirs. This process consumes energy, and for a reasonable energy cost, the reservoirs cannot be pure; thus, our MoSK transmitter is imperfect, releasing mixtures of both molecules for every symbol, resulting in inter-symbol interference (ISI). To mitigate ISI, the properties of the receiver are analyzed and a detection method based on the ratio of different molecules is proposed. Theoretical and simulation results are provided, showing that with the increase of energy cost, the system achieves better performance. The good performance of the proposed detection scheme is also demonstrated. |
cs/9906027 | Gillian Callaghan | Yorick Wilks and Roberta Catizone | Human-Computer Conversation | 14 pages, 1 figure | null | null | CS-99-04 | cs.CL cs.HC | null | The article surveys a little of the history of the technology, sets out the
main current theoretical approaches in brief, and discusses the on-going
opposition between theoretical and empirical approaches. It illustrates the
situation with some discussion of CONVERSE, a system that won the Loebner prize
in 1997 and which displays features of both approaches.
| [
{
"created": "Fri, 25 Jun 1999 11:44:42 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Wilks",
"Yorick",
""
],
[
"Catizone",
"Roberta",
""
]
] | The article surveys a little of the history of the technology, sets out the main current theoretical approaches in brief, and discusses the on-going opposition between theoretical and empirical approaches. It illustrates the situation with some discussion of CONVERSE, a system that won the Loebner prize in 1997 and which displays features of both approaches. |
1207.5774 | Lou Marvin Caraig | Lou Marvin Caraig | A New Training Algorithm for Kanerva's Sparse Distributed Memory | 5 pages, 5 figures | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Sparse Distributed Memory proposed by Pentii Kanerva (SDM in short) was
thought to be a model of human long term memory. The architecture of the SDM
permits to store binary patterns and to retrieve them using partially matching
patterns. However Kanerva's model is especially efficient only in handling
random data. The purpose of this article is to introduce a new approach of
training Kanerva's SDM that can handle efficiently non-random data, and to
provide it the capability to recognize inverted patterns. This approach uses a
signal model which is different from the one proposed for different purposes by
Hely, Willshaw and Hayes in [4]. This article additionally suggests a different
way of creating hard locations in the memory despite the Kanerva's static
model.
| [
{
"created": "Sun, 22 Jul 2012 16:30:07 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Jul 2012 05:04:18 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Jul 2012 08:24:51 GMT",
"version": "v3"
}
] | 2012-07-30 | [
[
"Caraig",
"Lou Marvin",
""
]
] | The Sparse Distributed Memory proposed by Pentti Kanerva (SDM for short) was thought to be a model of human long-term memory. The architecture of the SDM permits storing binary patterns and retrieving them using partially matching patterns. However, Kanerva's model is especially efficient only in handling random data. The purpose of this article is to introduce a new approach to training Kanerva's SDM that can efficiently handle non-random data, and to provide it with the capability to recognize inverted patterns. This approach uses a signal model which is different from the one proposed for different purposes by Hely, Willshaw and Hayes in [4]. This article additionally suggests a different way of creating hard locations in the memory, in contrast to Kanerva's static model. |
1908.01761 | Shengbin Jia | Shengbin Jia, Yang Xiang | Hybrid Neural Tagging Model for Open Relation Extraction | arXiv admin note: substantial text overlap with arXiv:1809.09408 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open relation extraction (ORE) remains a challenge to obtain a semantic
representation by discovering arbitrary relation tuples from the unstructured
text. Conventional methods heavily depend on feature engineering or syntactic
parsing, they are inefficient or error-cascading. Recently, leveraging
supervised deep learning structures to address the ORE task is an
extraordinarily promising way. However, there are two main challenges: (1) The
lack of enough labeled corpus to support supervised training; (2) The
exploration of specific neural architecture that adapts to the characteristics
of open relation extracting. In this paper, to overcome these difficulties, we
build a large-scale, high-quality training corpus in a fully automated way, and
design a tagging scheme to assist in transforming the ORE task into a sequence
tagging processing. Furthermore, we propose a hybrid neural network model
(HNN4ORT) for open relation tagging. The model employs the Ordered Neurons LSTM
to encode potential syntactic information for capturing the associations among
the arguments and relations. It also emerges a novel Dual Aware Mechanism,
including Local-aware Attention and Global-aware Convolution. The dual aware
nesses complement each other so that the model can take the sentence-level
semantics as a global perspective, and at the same time implement salient local
features to achieve sparse annotation. Experimental results on various testing
sets show that our model can achieve state-of-the-art performances compared to
the conventional methods or other neural models.
| [
{
"created": "Fri, 26 Jul 2019 11:29:37 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Aug 2019 08:47:24 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Feb 2020 08:59:31 GMT",
"version": "v3"
}
] | 2020-02-14 | [
[
"Jia",
"Shengbin",
""
],
[
"Xiang",
"Yang",
""
]
] | Open relation extraction (ORE) aims to obtain a semantic representation by discovering arbitrary relation tuples in unstructured text, and it remains a challenge. Conventional methods heavily depend on feature engineering or syntactic parsing; they are inefficient or error-cascading. Recently, leveraging supervised deep learning structures to address the ORE task has become an extraordinarily promising approach. However, there are two main challenges: (1) the lack of a large enough labeled corpus to support supervised training; (2) the exploration of a specific neural architecture that adapts to the characteristics of open relation extraction. In this paper, to overcome these difficulties, we build a large-scale, high-quality training corpus in a fully automated way, and design a tagging scheme to help transform the ORE task into a sequence tagging process. Furthermore, we propose a hybrid neural network model (HNN4ORT) for open relation tagging. The model employs the Ordered Neurons LSTM to encode potential syntactic information for capturing the associations among the arguments and relations. It also introduces a novel Dual Aware Mechanism, including Local-aware Attention and Global-aware Convolution. The two awarenesses complement each other so that the model can take the sentence-level semantics as a global perspective and at the same time exploit salient local features to achieve sparse annotation. Experimental results on various test sets show that our model achieves state-of-the-art performance compared to conventional methods and other neural models. |
cs/0006019 | Frankie James | Manny Rayner, Beth Ann Hockey, Frankie James | A Compact Architecture for Dialogue Management Based on Scripts and
Meta-Outputs | null | Language Technology Joint Conference ANLP-NAACL 2000. 29 April - 4
May 2000, Seattle, WA | null | null | cs.CL | null | We describe an architecture for spoken dialogue interfaces to semi-autonomous
systems that transforms speech signals through successive representations of
linguistic, dialogue, and domain knowledge. Each step produces an output, and a
meta-output describing the transformation, with an executable program in a
simple scripting language as the final result. The output/meta-output
distinction permits perspicuous treatment of diverse tasks such as resolving
pronouns, correcting user misconceptions, and optimizing scripts.
| [
{
"created": "Fri, 9 Jun 2000 21:41:54 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Rayner",
"Manny",
""
],
[
"Hockey",
"Beth Ann",
""
],
[
"James",
"Frankie",
""
]
] | We describe an architecture for spoken dialogue interfaces to semi-autonomous systems that transforms speech signals through successive representations of linguistic, dialogue, and domain knowledge. Each step produces an output, and a meta-output describing the transformation, with an executable program in a simple scripting language as the final result. The output/meta-output distinction permits perspicuous treatment of diverse tasks such as resolving pronouns, correcting user misconceptions, and optimizing scripts. |
1809.00970 | Laurent Lejeune | Laurent Lejeune, Jan Grossrieder, Raphael Sznitman | Iterative multi-path tracking for video and volume segmentation with
sparse point supervision | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent machine learning strategies for segmentation tasks have shown great
ability when trained on large pixel-wise annotated image datasets. It remains a
major challenge however to aggregate such datasets, as the time and monetary
cost associated with collecting extensive annotations is extremely high. This
is particularly the case for generating precise pixel-wise annotations in video
and volumetric image data. To this end, this work presents a novel framework to
produce pixel-wise segmentations using minimal supervision. Our method relies
on 2D point supervision, whereby a single 2D location within an object of
interest is provided on each image of the data. Our method then estimates the
object appearance in a semi-supervised fashion by learning
object-image-specific features and by using these in a semi-supervised learning
framework. Our object model is then used in a graph-based optimization problem
that takes into account all provided locations and the image data in order to
infer the complete pixel-wise segmentation. In practice, we solve this
optimally as a tracking problem using a K-shortest path approach. Both the
object model and segmentation are then refined iteratively to further improve
the final segmentation. We show that by collecting 2D locations using a gaze
tracker, our approach can provide state-of-the-art segmentations on a range of
objects and image modalities (video and 3D volumes), and that these can then be
used to train supervised machine learning classifiers.
| [
{
"created": "Mon, 27 Aug 2018 13:38:50 GMT",
"version": "v1"
}
] | 2021-07-20 | [
[
"Lejeune",
"Laurent",
""
],
[
"Grossrieder",
"Jan",
""
],
[
"Sznitman",
"Raphael",
""
]
] | Recent machine learning strategies for segmentation tasks have shown great ability when trained on large pixel-wise annotated image datasets. It remains a major challenge however to aggregate such datasets, as the time and monetary cost associated with collecting extensive annotations is extremely high. This is particularly the case for generating precise pixel-wise annotations in video and volumetric image data. To this end, this work presents a novel framework to produce pixel-wise segmentations using minimal supervision. Our method relies on 2D point supervision, whereby a single 2D location within an object of interest is provided on each image of the data. Our method then estimates the object appearance in a semi-supervised fashion by learning object-image-specific features and by using these in a semi-supervised learning framework. Our object model is then used in a graph-based optimization problem that takes into account all provided locations and the image data in order to infer the complete pixel-wise segmentation. In practice, we solve this optimally as a tracking problem using a K-shortest path approach. Both the object model and segmentation are then refined iteratively to further improve the final segmentation. We show that by collecting 2D locations using a gaze tracker, our approach can provide state-of-the-art segmentations on a range of objects and image modalities (video and 3D volumes), and that these can then be used to train supervised machine learning classifiers. |
1601.02379 | Alex Lotz | Alex Lotz, Arne Hamann, Ingo L\"utkebohle, Dennis Stampfer, Matthias
Lutz and Christian Schlegel | Modeling Non-Functional Application Domain Constraints for
Component-Based Robotics Software Systems | Presented at DSLRob 2015 (arXiv:1601.00877) | null | null | DSLRob/2015/01 | cs.RO cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Service robots are complex, heterogeneous, software intensive systems built
from components. Recent robotics research trends mainly address isolated
capabilities on functional level. Non-functional properties, such as
responsiveness or deterministic behavior, are addressed only in isolation (if
at all). We argue that handling such non-functional properties on system level
is a crucial next step. We claim that precise control over
application-specific, dynamic execution and interaction behavior of functional
components -- i.e. clear computation and communication semantics on model level
without hidden code-defined parts -- is a key ingredient thereto.
In this paper, we propose modeling concepts for these semantics, and present
a meta-model which (i) enables component developers to implement component
functionalities without presuming application-specific, system-level
attributes, and (ii) enables system integrators to reason about causal
dependencies between components as well as system-level data-flow
characteristics. This allows to control data-propagation semantics and system
properties such as end-to-end latencies during system integration without
breaking component encapsulation.
| [
{
"created": "Mon, 11 Jan 2016 10:14:28 GMT",
"version": "v1"
}
] | 2016-01-12 | [
[
"Lotz",
"Alex",
""
],
[
"Hamann",
"Arne",
""
],
[
"Lütkebohle",
"Ingo",
""
],
[
"Stampfer",
"Dennis",
""
],
[
"Lutz",
"Matthias",
""
],
[
"Schlegel",
"Christian",
""
]
] | Service robots are complex, heterogeneous, software-intensive systems built from components. Recent robotics research trends mainly address isolated capabilities at the functional level. Non-functional properties, such as responsiveness or deterministic behavior, are addressed only in isolation (if at all). We argue that handling such non-functional properties at the system level is a crucial next step. We claim that precise control over application-specific, dynamic execution and interaction behavior of functional components -- i.e. clear computation and communication semantics on model level without hidden code-defined parts -- is a key ingredient thereto. In this paper, we propose modeling concepts for these semantics, and present a meta-model which (i) enables component developers to implement component functionalities without presuming application-specific, system-level attributes, and (ii) enables system integrators to reason about causal dependencies between components as well as system-level data-flow characteristics. This makes it possible to control data-propagation semantics and system properties such as end-to-end latencies during system integration without breaking component encapsulation. |
2305.07842 | Jasmine Roberts | Jasmine Roberts | The AR/VR Technology Stack: A Central Repository of Software Development
Libraries, Platforms, and Tools | null | null | 10.13140/RG.2.2.10465.17769 | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | A comprehensive repository of software development libraries, platforms, and
tools specific to the domains of augmented, virtual, and mixed reality.
| [
{
"created": "Sat, 13 May 2023 05:50:26 GMT",
"version": "v1"
}
] | 2023-05-16 | [
[
"Roberts",
"Jasmine",
""
]
] | A comprehensive repository of software development libraries, platforms, and tools specific to the domains of augmented, virtual, and mixed reality. |
2406.05898 | Mingwei Tang | Mingwei Tang, Meng Liu, Hong Li, Junjie Yang, Chenglin Wei, Boyang Li,
Dai Li, Rengan Xu, Yifan Xu, Zehua Zhang, Xiangyu Wang, Linfeng Liu, Yuelei
Xie, Chengye Liu, Labib Fawaz, Li Li, Hongnan Wang, Bill Zhu, Sri Reddy | Async Learned User Embeddings for Ads Delivery Optimization | Accepted by workshop on Multimodal Representation and Retrieval at
SIGIR 2024, Washington DC | null | null | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recommendation systems, high-quality user embeddings can capture subtle
preferences, enable precise similarity calculations, and adapt to changing
preferences over time to maintain relevance. The effectiveness of
recommendation systems depends on the quality of user embeddings. We propose to
asynchronously learn high-fidelity user embeddings for billions of users each
day from sequence-based multimodal user activities through a Transformer-like
large-scale feature learning module. The asynchronously learned user
representation embeddings (ALURE) are further converted to user similarity
graphs through graph learning and then combined with users' real-time
activities to retrieve highly relevant ad candidates for the ads delivery
system. Our method shows significant gains in both offline and online
experiments.
| [
{
"created": "Sun, 9 Jun 2024 19:35:20 GMT",
"version": "v1"
},
{
"created": "Sun, 23 Jun 2024 05:43:41 GMT",
"version": "v2"
}
] | 2024-06-25 | [
[
"Tang",
"Mingwei",
""
],
[
"Liu",
"Meng",
""
],
[
"Li",
"Hong",
""
],
[
"Yang",
"Junjie",
""
],
[
"Wei",
"Chenglin",
""
],
[
"Li",
"Boyang",
""
],
[
"Li",
"Dai",
""
],
[
"Xu",
"Rengan",
""
],
[
"Xu",
"Yifan",
""
],
[
"Zhang",
"Zehua",
""
],
[
"Wang",
"Xiangyu",
""
],
[
"Liu",
"Linfeng",
""
],
[
"Xie",
"Yuelei",
""
],
[
"Liu",
"Chengye",
""
],
[
"Fawaz",
"Labib",
""
],
[
"Li",
"Li",
""
],
[
"Wang",
"Hongnan",
""
],
[
"Zhu",
"Bill",
""
],
[
"Reddy",
"Sri",
""
]
] | In recommendation systems, high-quality user embeddings can capture subtle preferences, enable precise similarity calculations, and adapt to changing preferences over time to maintain relevance. The effectiveness of recommendation systems depends on the quality of user embeddings. We propose to asynchronously learn high-fidelity user embeddings for billions of users each day from sequence-based multimodal user activities through a Transformer-like large-scale feature learning module. The asynchronously learned user representation embeddings (ALURE) are further converted to user similarity graphs through graph learning and then combined with users' real-time activities to retrieve highly relevant ad candidates for the ads delivery system. Our method shows significant gains in both offline and online experiments. |
2112.11953 | Xiao Xu | Xiao Xu, Libo Qin, Kaiji Chen, Guoxing Wu, Linlin Li, Wanxiang Che | Text is no more Enough! A Benchmark for Profile-based Spoken Language
Understanding | Accepted by AAAI 2022 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current research on spoken language understanding (SLU) is largely limited
to a simple setting: plain text-based SLU that takes the user utterance as
input and generates its corresponding semantic frames (e.g., intent and slots).
Unfortunately, such a simple setting may fail to work in complex real-world
scenarios when an utterance is semantically ambiguous, which plain text-based
SLU models cannot resolve. In this paper, we first introduce a new and
important task, Profile-based Spoken Language Understanding (ProSLU), which
requires a model to rely not only on the plain text but also on the
supporting profile information to predict the correct intents and slots. To
this end, we further introduce a large-scale human-annotated Chinese dataset
with over 5K utterances and their corresponding supporting profile information
(Knowledge Graph (KG), User Profile (UP), Context Awareness (CA)). In addition,
we evaluate several state-of-the-art baseline models and explore a multi-level
knowledge adapter to effectively incorporate profile information. Experimental
results reveal that all existing text-based SLU models fail to work when the
utterances are semantically ambiguous and our proposed framework can
effectively fuse the supporting information for sentence-level intent detection
and token-level slot filling. Finally, we summarize key challenges and provide
new directions for future work, which we hope will facilitate further research.
| [
{
"created": "Wed, 22 Dec 2021 15:22:17 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jan 2022 12:26:57 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Jan 2022 15:18:17 GMT",
"version": "v3"
}
] | 2022-01-13 | [
[
"Xu",
"Xiao",
""
],
[
"Qin",
"Libo",
""
],
[
"Chen",
"Kaiji",
""
],
[
"Wu",
"Guoxing",
""
],
[
"Li",
"Linlin",
""
],
[
"Che",
"Wanxiang",
""
]
] | Current research on spoken language understanding (SLU) is largely limited to a simple setting: plain text-based SLU that takes the user utterance as input and generates its corresponding semantic frames (e.g., intent and slots). Unfortunately, such a simple setting may fail to work in complex real-world scenarios when an utterance is semantically ambiguous, which plain text-based SLU models cannot resolve. In this paper, we first introduce a new and important task, Profile-based Spoken Language Understanding (ProSLU), which requires a model to rely not only on the plain text but also on the supporting profile information to predict the correct intents and slots. To this end, we further introduce a large-scale human-annotated Chinese dataset with over 5K utterances and their corresponding supporting profile information (Knowledge Graph (KG), User Profile (UP), Context Awareness (CA)). In addition, we evaluate several state-of-the-art baseline models and explore a multi-level knowledge adapter to effectively incorporate profile information. Experimental results reveal that all existing text-based SLU models fail to work when the utterances are semantically ambiguous and our proposed framework can effectively fuse the supporting information for sentence-level intent detection and token-level slot filling. Finally, we summarize key challenges and provide new directions for future work, which we hope will facilitate further research. |
2004.10700 | Netanel Raviv | Netanel Raviv, Siddharth Jain, Pulakesh Upadhyaya, Jehoshua Bruck, and
Anxiao Jiang | CodNN -- Robust Neural Networks From Coded Classification | To appear in ISIT '20 | null | null | null | cs.LG cs.CR cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Neural Networks (DNNs) are a revolutionary force in the ongoing
information revolution, and yet their intrinsic properties remain a mystery. In
particular, it is widely known that DNNs are highly sensitive to noise, whether
adversarial or random. This poses a fundamental challenge for hardware
implementations of DNNs, and for their deployment in critical applications such
as autonomous driving. In this paper we construct robust DNNs via error
correcting codes. By our approach, either the data or internal layers of the
DNN are coded with error correcting codes, and successful computation under
noise is guaranteed. Since DNNs can be seen as a layered concatenation of
classification tasks, our research begins with the core task of classifying
noisy coded inputs, and progresses towards robust DNNs. We focus on binary data
and linear codes. Our main result is that the prevalent parity code can
guarantee robustness for a large family of DNNs, which includes the recently
popularized binarized neural networks. Further, we show that the coded
classification problem has a deep connection to Fourier analysis of Boolean
functions. In contrast to existing solutions in the literature, our results do
not rely on altering the training process of the DNN, and provide
mathematically rigorous guarantees rather than experimental evidence.
| [
{
"created": "Wed, 22 Apr 2020 17:07:15 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Apr 2020 22:55:41 GMT",
"version": "v2"
}
] | 2020-05-01 | [
[
"Raviv",
"Netanel",
""
],
[
"Jain",
"Siddharth",
""
],
[
"Upadhyaya",
"Pulakesh",
""
],
[
"Bruck",
"Jehoshua",
""
],
[
"Jiang",
"Anxiao",
""
]
] | Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution, and yet their intrinsic properties remain a mystery. In particular, it is widely known that DNNs are highly sensitive to noise, whether adversarial or random. This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving. In this paper we construct robust DNNs via error correcting codes. By our approach, either the data or internal layers of the DNN are coded with error correcting codes, and successful computation under noise is guaranteed. Since DNNs can be seen as a layered concatenation of classification tasks, our research begins with the core task of classifying noisy coded inputs, and progresses towards robust DNNs. We focus on binary data and linear codes. Our main result is that the prevalent parity code can guarantee robustness for a large family of DNNs, which includes the recently popularized binarized neural networks. Further, we show that the coded classification problem has a deep connection to Fourier analysis of Boolean functions. In contrast to existing solutions in the literature, our results do not rely on altering the training process of the DNN, and provide mathematically rigorous guarantees rather than experimental evidence. |
2103.02727 | Sydney Katz | Sydney M. Katz, Amir Maleki, Erdem B{\i}y{\i}k, Mykel J. Kochenderfer | Preference-based Learning of Reward Function Features | 8 pages, 8 figures | null | null | null | cs.RO cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Preference-based learning of reward functions, where the reward function is
learned using comparison data, has been well studied for complex robotic tasks
such as autonomous driving. Existing algorithms have focused on learning reward
functions that are linear in a set of trajectory features. The features are
typically hand-coded, and preference-based learning is used to determine a
particular user's relative weighting for each feature. Designing a
representative set of features to encode reward is challenging and can result
in inaccurate models that fail to model the users' preferences or perform the
task properly. In this paper, we present a method to learn both the relative
weighting among features as well as additional features that help encode a
user's reward function. The additional features are modeled as a neural network
that is trained on the data from pairwise comparison queries. We apply our
methods to a driving scenario used in previous work and compare the predictive
power of our method to that of only hand-coded features. We perform additional
analysis to interpret the learned features and examine the optimal
trajectories. Our results show that adding an additional learned feature to the
reward model enhances both its predictive power and expressiveness, producing
unique results for each user.
| [
{
"created": "Wed, 3 Mar 2021 22:32:43 GMT",
"version": "v1"
}
] | 2021-03-05 | [
[
"Katz",
"Sydney M.",
""
],
[
"Maleki",
"Amir",
""
],
[
"Bıyık",
"Erdem",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] | Preference-based learning of reward functions, where the reward function is learned using comparison data, has been well studied for complex robotic tasks such as autonomous driving. Existing algorithms have focused on learning reward functions that are linear in a set of trajectory features. The features are typically hand-coded, and preference-based learning is used to determine a particular user's relative weighting for each feature. Designing a representative set of features to encode reward is challenging and can result in inaccurate models that fail to model the users' preferences or perform the task properly. In this paper, we present a method to learn both the relative weighting among features as well as additional features that help encode a user's reward function. The additional features are modeled as a neural network that is trained on the data from pairwise comparison queries. We apply our methods to a driving scenario used in previous work and compare the predictive power of our method to that of only hand-coded features. We perform additional analysis to interpret the learned features and examine the optimal trajectories. Our results show that adding an additional learned feature to the reward model enhances both its predictive power and expressiveness, producing unique results for each user. |
1401.0708 | Taraka Rama Kasicheyanula | Taraka Rama, Sudheer Kolachina, Lakshmi Bai B | Quantitative methods for Phylogenetic Inference in Historical
Linguistics: An experimental case study of South Central Dravidian | null | Indian Linguistics, Volume 70, 2009 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/3.0/ | In this paper we examine the usefulness of two classes of algorithms, Distance
Methods and Discrete Character Methods (Felsenstein and Felsenstein 2003), widely
used in genetics, for predicting the family relationships among a set of
related languages and, therefore, diachronic language change. Applying these
algorithms to the data on the numbers of shared cognates-with-change and
changed as well as unchanged cognates for a group of six languages belonging to
a Dravidian language sub-family given in Krishnamurti et al. (1983), we
observed that the resultant phylogenetic trees are largely in agreement with
the linguistic family tree constructed using the comparative method of
reconstruction with only a few minor differences. Furthermore, we studied these
minor differences and found that they were cases of genuine ambiguity even for
a well-trained historical linguist. We evaluated the trees obtained through our
experiments using a well-defined criterion and report the results here. We
finally conclude that quantitative methods like the ones we examined are quite
useful in predicting family relationships among languages. In addition, we
conclude that a modest degree of confidence attached to the intuition that
there could indeed exist a parallelism between the processes of linguistic and
genetic change is not totally misplaced.
| [
{
"created": "Fri, 3 Jan 2014 20:17:47 GMT",
"version": "v1"
}
] | 2014-01-06 | [
[
"Rama",
"Taraka",
""
],
[
"Kolachina",
"Sudheer",
""
],
[
"B",
"Lakshmi Bai",
""
]
] | In this paper we examine the usefulness of two classes of algorithms, Distance Methods and Discrete Character Methods (Felsenstein and Felsenstein 2003), widely used in genetics, for predicting the family relationships among a set of related languages and, therefore, diachronic language change. Applying these algorithms to the data on the numbers of shared cognates-with-change and changed as well as unchanged cognates for a group of six languages belonging to a Dravidian language sub-family given in Krishnamurti et al. (1983), we observed that the resultant phylogenetic trees are largely in agreement with the linguistic family tree constructed using the comparative method of reconstruction with only a few minor differences. Furthermore, we studied these minor differences and found that they were cases of genuine ambiguity even for a well-trained historical linguist. We evaluated the trees obtained through our experiments using a well-defined criterion and report the results here. We finally conclude that quantitative methods like the ones we examined are quite useful in predicting family relationships among languages. In addition, we conclude that a modest degree of confidence attached to the intuition that there could indeed exist a parallelism between the processes of linguistic and genetic change is not totally misplaced. |
2309.15268 | Amanda Adkins | Amanda Adkins, Taijing Chen, Joydeep Biswas | ObVi-SLAM: Long-Term Object-Visual SLAM | 8 pages, 7 figures, 1 table plus appendix with 4 figures and 1 table | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Robots responsible for tasks over long time scales must be able to localize
consistently and scalably amid geometric, viewpoint, and appearance changes.
Existing visual SLAM approaches rely on low-level feature descriptors that are
not robust to such environmental changes and result in large map sizes that
scale poorly over long-term deployments. In contrast, object detections are
robust to environmental variations and lead to more compact representations,
but most object-based SLAM systems target short-term indoor deployments with
close objects. In this paper, we introduce ObVi-SLAM to overcome these
challenges by leveraging the best of both approaches. ObVi-SLAM uses low-level
visual features for high-quality short-term visual odometry; and to ensure
global, long-term consistency, ObVi-SLAM builds an uncertainty-aware long-term
map of persistent objects and updates it after every deployment. By evaluating
ObVi-SLAM on data from 16 deployment sessions spanning different weather and
lighting conditions, we empirically show that ObVi-SLAM generates accurate
localization estimates consistent over long time scales in spite of varying
appearance conditions.
| [
{
"created": "Tue, 26 Sep 2023 20:57:35 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Oct 2023 21:10:47 GMT",
"version": "v2"
}
] | 2023-10-24 | [
[
"Adkins",
"Amanda",
""
],
[
"Chen",
"Taijing",
""
],
[
"Biswas",
"Joydeep",
""
]
] | Robots responsible for tasks over long time scales must be able to localize consistently and scalably amid geometric, viewpoint, and appearance changes. Existing visual SLAM approaches rely on low-level feature descriptors that are not robust to such environmental changes and result in large map sizes that scale poorly over long-term deployments. In contrast, object detections are robust to environmental variations and lead to more compact representations, but most object-based SLAM systems target short-term indoor deployments with close objects. In this paper, we introduce ObVi-SLAM to overcome these challenges by leveraging the best of both approaches. ObVi-SLAM uses low-level visual features for high-quality short-term visual odometry; and to ensure global, long-term consistency, ObVi-SLAM builds an uncertainty-aware long-term map of persistent objects and updates it after every deployment. By evaluating ObVi-SLAM on data from 16 deployment sessions spanning different weather and lighting conditions, we empirically show that ObVi-SLAM generates accurate localization estimates consistent over long time scales in spite of varying appearance conditions. |
2003.11410 | Ana Tanevska | Ana Tanevska, Francesco Rea, Giulio Sandini, Lola Ca\~namero,
Alessandra Sciutti | A Socially Adaptable Framework for Human-Robot Interaction | null | null | 10.3389/frobt.2020.00121 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In our everyday lives we are accustomed to partaking in complex, personalized,
adaptive interactions with our peers. For a social robot to be able to recreate
this same kind of rich, human-like interaction, it should be aware of our needs
and affective states and be capable of continuously adapting its behavior to
them. One proposed solution to this problem would involve the robot learning
how to select the behaviors that would maximize the pleasantness of the
interaction for its peers, guided by an internal motivation system that would
provide autonomy to its decision-making process. We are interested in studying
how an adaptive robotic framework of this kind would function and personalize
to different users. In addition, we explore whether including the element of
adaptability and personalization in a cognitive framework will bring any
additional richness to the human-robot interaction (HRI), or if it will instead
bring uncertainty and unpredictability that would not be accepted by the
robot's human peers. To this end, we designed a socially-adaptive framework for
the humanoid robot iCub which allows it to perceive and reuse the affective and
interactive signals from the person as input for the adaptation based on
internal social motivation. We propose a comparative interaction study with
iCub where users act as the robot's caretaker, and iCub's social adaptation is
guided by an internal comfort level that varies with the amount of stimuli iCub
receives from its caretaker. We investigate and compare how the internal
dynamics of the robot would be perceived by people in a condition when the
robot does not personalize its interaction, and in a condition where it is
adaptive. Finally, we establish the potential benefits that an adaptive
framework could bring to the context of having repeated interactions with a
humanoid robot.
| [
{
"created": "Wed, 25 Mar 2020 13:53:12 GMT",
"version": "v1"
}
] | 2020-10-26 | [
[
"Tanevska",
"Ana",
""
],
[
"Rea",
"Francesco",
""
],
[
"Sandini",
"Giulio",
""
],
[
"Cañamero",
"Lola",
""
],
[
"Sciutti",
"Alessandra",
""
]
] | In our everyday lives we are accustomed to partaking in complex, personalized, adaptive interactions with our peers. For a social robot to be able to recreate this same kind of rich, human-like interaction, it should be aware of our needs and affective states and be capable of continuously adapting its behavior to them. One proposed solution to this problem would involve the robot learning how to select the behaviors that would maximize the pleasantness of the interaction for its peers, guided by an internal motivation system that would provide autonomy to its decision-making process. We are interested in studying how an adaptive robotic framework of this kind would function and personalize to different users. In addition, we explore whether including the element of adaptability and personalization in a cognitive framework will bring any additional richness to the human-robot interaction (HRI), or if it will instead bring uncertainty and unpredictability that would not be accepted by the robot's human peers. To this end, we designed a socially-adaptive framework for the humanoid robot iCub which allows it to perceive and reuse the affective and interactive signals from the person as input for the adaptation based on internal social motivation. We propose a comparative interaction study with iCub where users act as the robot's caretaker, and iCub's social adaptation is guided by an internal comfort level that varies with the amount of stimuli iCub receives from its caretaker. We investigate and compare how the internal dynamics of the robot would be perceived by people in a condition when the robot does not personalize its interaction, and in a condition where it is adaptive. Finally, we establish the potential benefits that an adaptive framework could bring to the context of having repeated interactions with a humanoid robot. |
2407.08349 | Durga R | Yashwanth Rao, Gaurisankar S, Durga R, Aparna Purayath, Vivek Maik,
Manojkumar Lakshmanan, and Mohanasankar Sivaprakasm | Spine Vision X-Ray Image based GUI Planning of Pedicle Screws Using
Enhanced YOLOv5 for Vertebrae Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose an innovative Graphical User Interface (GUI) aimed
at improving preoperative planning and intra-operative guidance for precise
spinal screw placement through vertebrae segmentation. The methodology
encompasses both front-end and back-end computations. The front end comprises a
GUI that allows surgeons to precisely adjust the placement of screws on X-Ray
images, thereby improving the simulation of surgical screw insertion in the
patient's spine. On the other hand, the back-end processing involves several
steps, including acquiring spinal X-ray images, performing pre-processing
techniques to reduce noise, and training a neural network model to achieve
real-time segmentation of the vertebrae. The integration of vertebral
segmentation in the GUI ensures precise screw placement, reducing complications
like nerve injury and ultimately improving surgical outcomes. Spine-Vision
provides a comprehensive solution with innovative features like synchronous
AP-LP planning, accurate screw positioning via vertebrae segmentation,
effective screw visualization, and dynamic position adjustments. This X-ray
image-based GUI workflow emerges as a valuable tool, enhancing precision and
safety in spinal screw placement and planning procedures.
| [
{
"created": "Thu, 11 Jul 2024 09:59:43 GMT",
"version": "v1"
}
] | 2024-07-12 | [
[
"Rao",
"Yashwanth",
""
],
[
"S",
"Gaurisankar",
""
],
[
"R",
"Durga",
""
],
[
"Purayath",
"Aparna",
""
],
[
"Maik",
"Vivek",
""
],
[
"Lakshmanan",
"Manojkumar",
""
],
[
"Sivaprakasm",
"Mohanasankar",
""
]
] | In this paper, we propose an innovative Graphical User Interface (GUI) aimed at improving preoperative planning and intra-operative guidance for precise spinal screw placement through vertebrae segmentation. The methodology encompasses both front-end and back-end computations. The front end comprises a GUI that allows surgeons to precisely adjust the placement of screws on X-Ray images, thereby improving the simulation of surgical screw insertion in the patient's spine. On the other hand, the back-end processing involves several steps, including acquiring spinal X-ray images, performing pre-processing techniques to reduce noise, and training a neural network model to achieve real-time segmentation of the vertebrae. The integration of vertebral segmentation in the GUI ensures precise screw placement, reducing complications like nerve injury and ultimately improving surgical outcomes. The Spine-Vision provides a comprehensive solution with innovative features like synchronous AP-LP planning, accurate screw positioning via vertebrae segmentation, effective screw visualization, and dynamic position adjustments. This X-ray image-based GUI workflow emerges as a valuable tool, enhancing precision and safety in spinal screw placement and planning procedures. |
2002.07505 | Hatem Khalloof | Hatem Khalloof, Wilfried Jakob, Shadi Shahoud, Clemens Duepmeier and
Veit Hagenmeyer | A Scalable Method for Scheduling Distributed Energy Resources using
Parallelized Population-based Metaheuristics | null | null | null | null | cs.DC cs.AI cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have seen an increasing integration of distributed renewable
energy resources into existing electric power grids. Due to the uncertain
nature of renewable energy resources, network operators are faced with new
challenges in balancing load and generation. In order to meet the new
requirements, intelligent distributed energy resource plants can be used which
provide, as virtual power plants, e.g. demand-side management or flexible
generation. However, the calculation of an adequate schedule for the unit
commitment of such distributed energy resources is a complex optimization
problem which is typically too complex for standard optimization algorithms if
large numbers of distributed energy resources are considered. For solving such
complex optimization tasks, population-based metaheuristics -- as e.g.
evolutionary algorithms -- represent powerful alternatives. Admittedly,
evolutionary algorithms do require substantial computational power for solving such
problems in a timely manner. One promising solution for this performance
problem is the parallelization of the usually time-consuming evaluation of
alternative solutions. In the present paper, a new generic and highly scalable
parallel method for unit commitment of distributed energy resources using
metaheuristic algorithms is presented. It is based on microservices, container
virtualization and the publish/subscribe messaging paradigm for scheduling
distributed energy resources. Scalability and applicability of the proposed
solution are evaluated by performing parallelized optimizations in a big data
environment for three distinct distributed energy resource scheduling
scenarios. The new method provides cluster or cloud parallelizability and is
able to deal with a comparably large number of distributed energy resources.
The application of the new proposed method results in very good performance for
scaling up optimization speed.
| [
{
"created": "Tue, 18 Feb 2020 11:51:28 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Jun 2020 13:02:27 GMT",
"version": "v2"
}
] | 2020-06-05 | [
[
"Khalloof",
"Hatem",
""
],
[
"Jakob",
"Wilfried",
""
],
[
"Shahoud",
"Shadi",
""
],
[
"Duepmeier",
"Clemens",
""
],
[
"Hagenmeyer",
"Veit",
""
]
] | Recent years have seen an increasing integration of distributed renewable energy resources into existing electric power grids. Due to the uncertain nature of renewable energy resources, network operators are faced with new challenges in balancing load and generation. In order to meet the new requirements, intelligent distributed energy resource plants can be used which provide, as virtual power plants, e.g. demand-side management or flexible generation. However, the calculation of an adequate schedule for the unit commitment of such distributed energy resources is a complex optimization problem which is typically too complex for standard optimization algorithms if large numbers of distributed energy resources are considered. For solving such complex optimization tasks, population-based metaheuristics -- as e.g. evolutionary algorithms -- represent powerful alternatives. Admittedly, evolutionary algorithms do require substantial computational power for solving such problems in a timely manner. One promising solution for this performance problem is the parallelization of the usually time-consuming evaluation of alternative solutions. In the present paper, a new generic and highly scalable parallel method for unit commitment of distributed energy resources using metaheuristic algorithms is presented. It is based on microservices, container virtualization and the publish/subscribe messaging paradigm for scheduling distributed energy resources. Scalability and applicability of the proposed solution are evaluated by performing parallelized optimizations in a big data environment for three distinct distributed energy resource scheduling scenarios. The new method provides cluster or cloud parallelizability and is able to deal with a comparably large number of distributed energy resources. The application of the new proposed method results in very good performance for scaling up optimization speed. |
2004.13876 | Marcos Vin\'icius Treviso | Marcos V. Treviso and Andr\'e F. T. Martins | The Explanation Game: Towards Prediction Explainability through Sparse
Communication | BlackBoxNLP 2020 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainability is a topic of growing importance in NLP. In this work, we
provide a unified perspective of explainability as a communication problem
between an explainer and a layperson about a classifier's decision. We use this
framework to compare several prior approaches for extracting explanations,
including gradient methods, representation erasure, and attention mechanisms,
in terms of their communication success. In addition, we reinterpret these
methods at the light of classical feature selection, and we use this as
inspiration to propose new embedded methods for explainability, through the use
of selective, sparse attention. Experiments in text classification, natural
language entailment, and machine translation, using different configurations of
explainers and laypeople (including both machines and humans), reveal an
advantage of attention-based explainers over gradient and erasure methods.
Furthermore, human evaluation experiments show promising results with post-hoc
explainers trained to optimize communication success and faithfulness.
| [
{
"created": "Tue, 28 Apr 2020 22:27:19 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Oct 2020 08:05:13 GMT",
"version": "v2"
}
] | 2020-10-13 | [
[
"Treviso",
"Marcos V.",
""
],
[
"Martins",
"André F. T.",
""
]
] | Explainability is a topic of growing importance in NLP. In this work, we provide a unified perspective of explainability as a communication problem between an explainer and a layperson about a classifier's decision. We use this framework to compare several prior approaches for extracting explanations, including gradient methods, representation erasure, and attention mechanisms, in terms of their communication success. In addition, we reinterpret these methods in the light of classical feature selection, and we use this as inspiration to propose new embedded methods for explainability, through the use of selective, sparse attention. Experiments in text classification, natural language entailment, and machine translation, using different configurations of explainers and laypeople (including both machines and humans), reveal an advantage of attention-based explainers over gradient and erasure methods. Furthermore, human evaluation experiments show promising results with post-hoc explainers trained to optimize communication success and faithfulness.
cs/0603034 | Ivan Jos\'e Varzinczak | Andreas Herzig and Ivan Varzinczak | Metatheory of actions: beyond consistency | null | null | null | null | cs.AI | null | Consistency check has been the only criterion for theory evaluation in
logic-based approaches to reasoning about actions. This work goes beyond that
and contributes to the metatheory of actions by investigating what other
properties a good domain description in reasoning about actions should have. We
state some metatheoretical postulates concerning this sore spot. When all
postulates are satisfied together we have a modular action theory. Besides
being easier to understand and more elaboration tolerant in McCarthy's sense,
modular theories have interesting properties. We point out the problems that
arise when the postulates about modularity are violated and propose algorithmic
checks that can help the designer of an action theory to overcome them.
| [
{
"created": "Thu, 9 Mar 2006 10:07:46 GMT",
"version": "v1"
}
] | 2009-09-29 | [
[
"Herzig",
"Andreas",
""
],
[
"Varzinczak",
"Ivan",
""
]
] | Consistency check has been the only criterion for theory evaluation in logic-based approaches to reasoning about actions. This work goes beyond that and contributes to the metatheory of actions by investigating what other properties a good domain description in reasoning about actions should have. We state some metatheoretical postulates concerning this sore spot. When all postulates are satisfied together we have a modular action theory. Besides being easier to understand and more elaboration tolerant in McCarthy's sense, modular theories have interesting properties. We point out the problems that arise when the postulates about modularity are violated and propose algorithmic checks that can help the designer of an action theory to overcome them. |
1804.10848 | Abhishta - | C.G.J. Putman, Abhishta, Lambert J.M. Nieuwenhuis | Business Model of a Botnet | Proceedings of 2018, 26th Euromicro International conference on
Parallel, Distributed, and Network-Based Processing (PDP) | null | 10.1109/PDP2018.2018.00077 | null | cs.CY cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Botnets continue to be an active threat against firms or companies and
individuals worldwide. Previous research regarding botnets has unveiled
information on how the system and their stakeholders operate, but an insight on
the economic structure that supports these stakeholders is lacking. The
objective of this research is to analyse the business model and determine the
revenue stream of a botnet owner. We also study the botnet life-cycle and
determine the costs associated with it on the basis of four case studies. We
conclude that building a full scale cyber army from scratch is very expensive
where as acquiring a previously developed botnet requires a little cost. We
find that initial setup and monthly costs were minimal compared to total
revenue.
| [
{
"created": "Sat, 28 Apr 2018 21:20:57 GMT",
"version": "v1"
}
] | 2018-06-14 | [
[
"Putman",
"C. G. J.",
""
],
[
"Abhishta",
"",
""
],
[
"Nieuwenhuis",
"Lambert J. M.",
""
]
] | Botnets continue to be an active threat against firms and individuals worldwide. Previous research regarding botnets has unveiled information on how these systems and their stakeholders operate, but insight into the economic structure that supports these stakeholders is lacking. The objective of this research is to analyse the business model and determine the revenue stream of a botnet owner. We also study the botnet life-cycle and determine the costs associated with it on the basis of four case studies. We conclude that building a full-scale cyber army from scratch is very expensive, whereas acquiring a previously developed botnet costs comparatively little. We find that initial setup and monthly costs were minimal compared to total revenue.
1604.01464 | Mirza Kibria | Mirza Golam Kibria, Fang Yuan and Fumihide Kojima | Feedback Bits Allocation for Interference Minimization in Cognitive
Radio Communications | Feedback Bits Allocation | IEEE Wireless Commun. Lett., vol. 5, no. 1, pp. 104-107, Feb. 2016 | 10.1109/LWC.2015.2503771 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This letter studies the limited feedback cognitive radio system, where the
primary users (PU) are interfered by the secondary transmitter (ST) due to the
imperfect beamforming. We propose to allocate the feedback bits among multiple
PUs to minimize the maximum interference caused by the ST, by exploiting the
heterogeneous average channel gains. In addition, we study the problem of
minimizing the total feedback bits under a predefined interference threshold at
the PUs. The solutions with low complexity are proposed for the studied
problems, and the performances of bit allocations are analyzed. Simulation
results validate our analysis and demonstrate that the proposed solutions work
very well in terms of minimizing the maximum interference caused by the ST and
minimizing the total feedback bits under predefined interference threshold at
the PUs for limited feedback CR system.
| [
{
"created": "Wed, 6 Apr 2016 02:09:49 GMT",
"version": "v1"
}
] | 2017-08-16 | [
[
"Kibria",
"Mirza Golam",
""
],
[
"Yuan",
"Fang",
""
],
[
"Kojima",
"Fumihide",
""
]
] | This letter studies the limited-feedback cognitive radio system, where the primary users (PUs) suffer interference from the secondary transmitter (ST) due to imperfect beamforming. We propose to allocate the feedback bits among multiple PUs to minimize the maximum interference caused by the ST, by exploiting the heterogeneous average channel gains. In addition, we study the problem of minimizing the total feedback bits under a predefined interference threshold at the PUs. Low-complexity solutions are proposed for the studied problems, and the performance of the bit allocations is analyzed. Simulation results validate our analysis and demonstrate that the proposed solutions work very well in terms of minimizing the maximum interference caused by the ST and minimizing the total feedback bits under a predefined interference threshold at the PUs for the limited-feedback CR system.
1309.3132 | Xiao-Bo Jin | Xiao-Bo Jin, Guang-Gang Geng, Dexian Zhang | Combination of Multiple Bipartite Ranking for Web Content Quality
Evaluation | 17 pages, 8 figures, 2 tables | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web content quality estimation is crucial to various web content processing
applications. Our previous work applied Bagging + C4.5 to achive the best
results on the ECML/PKDD Discovery Challenge 2010, which is the comibination of
many point-wise rankinig models. In this paper, we combine multiple pair-wise
bipartite ranking learner to solve the multi-partite ranking problems for the
web quality estimation. In encoding stage, we present the ternary encoding and
the binary coding extending each rank value to $L - 1$ (L is the number of the
different ranking value). For the decoding, we discuss the combination of
multiple ranking results from multiple bipartite ranking models with the
predefined weighting and the adaptive weighting. The experiments on ECML/PKDD
2010 Discovery Challenge datasets show that \textit{binary coding} +
\textit{predefined weighting} yields the highest performance in all four
combinations and furthermore it is better than the best results reported in
ECML/PKDD 2010 Discovery Challenge competition.
| [
{
"created": "Thu, 12 Sep 2013 12:15:51 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Jun 2014 03:01:13 GMT",
"version": "v2"
}
] | 2014-06-27 | [
[
"Jin",
"Xiao-Bo",
""
],
[
"Geng",
"Guang-Gang",
""
],
[
"Zhang",
"Dexian",
""
]
] | Web content quality estimation is crucial to various web content processing applications. Our previous work applied Bagging + C4.5, a combination of many point-wise ranking models, to achieve the best results on the ECML/PKDD Discovery Challenge 2010. In this paper, we combine multiple pair-wise bipartite ranking learners to solve the multi-partite ranking problems for web quality estimation. In the encoding stage, we present the ternary encoding and the binary coding, extending each rank value to $L - 1$ (where $L$ is the number of distinct ranking values). For the decoding, we discuss the combination of multiple ranking results from multiple bipartite ranking models with predefined weighting and adaptive weighting. The experiments on the ECML/PKDD 2010 Discovery Challenge datasets show that \textit{binary coding} + \textit{predefined weighting} yields the highest performance among all four combinations and, furthermore, outperforms the best results reported in the ECML/PKDD 2010 Discovery Challenge competition.
2305.12768 | Jae-woong Lee | Jae-woong Lee, Seongmin Park, Mincheol Yoon, and Jongwuk Lee | uCTRL: Unbiased Contrastive Representation Learning via Alignment and
Uniformity for Collaborative Filtering | SIGIR 2023 | null | null | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Because implicit user feedback for the collaborative filtering (CF) models is
biased toward popular items, CF models tend to yield recommendation lists with
popularity bias. Previous studies have utilized inverse propensity weighting
(IPW) or causal inference to mitigate this problem. However, they solely employ
pointwise or pairwise loss functions and neglect to adopt a contrastive loss
function for learning meaningful user and item representations. In this paper,
we propose Unbiased ConTrastive Representation Learning (uCTRL), optimizing
alignment and uniformity functions derived from the InfoNCE loss function for
CF models. Specifically, we formulate an unbiased alignment function used in
uCTRL. We also devise a novel IPW estimation method that removes the bias of
both users and items. Despite its simplicity, uCTRL equipped with existing CF
models consistently outperforms state-of-the-art unbiased recommender models,
up to 12.22% for Recall@20 and 16.33% for NDCG@20 gains, on four benchmark
datasets.
| [
{
"created": "Mon, 22 May 2023 06:55:38 GMT",
"version": "v1"
}
] | 2023-05-23 | [
[
"Lee",
"Jae-woong",
""
],
[
"Park",
"Seongmin",
""
],
[
"Yoon",
"Mincheol",
""
],
[
"Lee",
"Jongwuk",
""
]
] | Because implicit user feedback for collaborative filtering (CF) models is biased toward popular items, CF models tend to yield recommendation lists with popularity bias. Previous studies have utilized inverse propensity weighting (IPW) or causal inference to mitigate this problem. However, they solely employ pointwise or pairwise loss functions and neglect to adopt a contrastive loss function for learning meaningful user and item representations. In this paper, we propose Unbiased ConTrastive Representation Learning (uCTRL), optimizing alignment and uniformity functions derived from the InfoNCE loss function for CF models. Specifically, we formulate an unbiased alignment function used in uCTRL. We also devise a novel IPW estimation method that removes the bias of both users and items. Despite its simplicity, uCTRL equipped with existing CF models consistently outperforms state-of-the-art unbiased recommender models, with gains of up to 12.22% in Recall@20 and 16.33% in NDCG@20, on four benchmark datasets.
2210.00131 | Emily McMilin | Emily McMilin | Underspecification in Language Modeling Tasks: A Causality-Informed
Study of Gendered Pronoun Resolution | 24 pages, 41 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modern language modeling tasks are often underspecified: for a given token
prediction, many words may satisfy the user's intent of producing natural
language at inference time, however only one word will minimize the task's loss
function at training time. We introduce a simple causal mechanism to describe
the role underspecification plays in the generation of spurious correlations.
Despite its simplicity, our causal model directly informs the development of
two lightweight black-box evaluation methods, that we apply to gendered pronoun
resolution tasks on a wide range of LLMs to 1) aid in the detection of
inference-time task underspecification by exploiting 2) previously unreported
gender vs. time and gender vs. location spurious correlations on LLMs with a
range of A) sizes: from BERT-base to GPT-4 Turbo Preview, B) pre-training
objectives: from masked & autoregressive language modeling to a mixture of
these objectives, and C) training stages: from pre-training only to
reinforcement learning from human feedback (RLHF). Code and open-source demos
available at https://github.com/2dot71mily/uspec.
| [
{
"created": "Fri, 30 Sep 2022 23:10:11 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Nov 2022 14:50:04 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Jul 2023 17:56:10 GMT",
"version": "v3"
},
{
"created": "Thu, 22 Feb 2024 18:52:15 GMT",
"version": "v4"
}
] | 2024-02-23 | [
[
"McMilin",
"Emily",
""
]
] | Modern language modeling tasks are often underspecified: for a given token prediction, many words may satisfy the user's intent of producing natural language at inference time; however, only one word will minimize the task's loss function at training time. We introduce a simple causal mechanism to describe the role underspecification plays in the generation of spurious correlations. Despite its simplicity, our causal model directly informs the development of two lightweight black-box evaluation methods, which we apply to gendered pronoun resolution tasks on a wide range of LLMs to 1) aid in the detection of inference-time task underspecification by exploiting 2) previously unreported gender vs. time and gender vs. location spurious correlations on LLMs with a range of A) sizes: from BERT-base to GPT-4 Turbo Preview, B) pre-training objectives: from masked & autoregressive language modeling to a mixture of these objectives, and C) training stages: from pre-training only to reinforcement learning from human feedback (RLHF). Code and open-source demos available at https://github.com/2dot71mily/uspec.
1507.06829 | Lisa Posch | Lisa Posch, Arnim Bleier, Philipp Schaer, Markus Strohmaier | The Polylingual Labeled Topic Model | Accepted for publication at KI 2015 (38th edition of the German
Conference on Artificial Intelligence) | null | 10.1007/978-3-319-24489-1_26 | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present the Polylingual Labeled Topic Model, a model which
combines the characteristics of the existing Polylingual Topic Model and
Labeled LDA. The model accounts for multiple languages with separate topic
distributions for each language while restricting the permitted topics of a
document to a set of predefined labels. We explore the properties of the model
in a two-language setting on a dataset from the social science domain. Our
experiments show that our model outperforms LDA and Labeled LDA in terms of
their held-out perplexity and that it produces semantically coherent topics
which are well interpretable by human subjects.
| [
{
"created": "Fri, 24 Jul 2015 13:01:20 GMT",
"version": "v1"
}
] | 2017-05-03 | [
[
"Posch",
"Lisa",
""
],
[
"Bleier",
"Arnim",
""
],
[
"Schaer",
"Philipp",
""
],
[
"Strohmaier",
"Markus",
""
]
] | In this paper, we present the Polylingual Labeled Topic Model, a model which combines the characteristics of the existing Polylingual Topic Model and Labeled LDA. The model accounts for multiple languages with separate topic distributions for each language while restricting the permitted topics of a document to a set of predefined labels. We explore the properties of the model in a two-language setting on a dataset from the social science domain. Our experiments show that our model outperforms LDA and Labeled LDA in terms of their held-out perplexity and that it produces semantically coherent topics which are well interpretable by human subjects. |
2303.12241 | Shiqiang Du | Kaiwu Zhang, Shiqiang Du, Baokai Liu, and Shengxia Gao | Preventing Dimensional Collapse of Incomplete Multi-View Clustering via
Direct Contrastive Learning | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Incomplete multi-view clustering (IMVC) is an unsupervised approach, among
which IMVC via contrastive learning has received attention due to its excellent
performance. The previous methods have the following problems: 1) Over-reliance
on additional projection heads when solving the dimensional collapse problem in
which latent features are only valid in lower-dimensional subspaces during
clustering. However, many parameters in the projection heads are unnecessary.
2) The recovered view contain inconsistent private information and useless
private information will mislead the learning of common semantics due to
consistent learning and reconstruction learning on the same feature. To address
the above issues, we propose a novel incomplete multi-view contrastive
clustering framework. This framework directly optimizes the latent feature
subspace, utilizes the learned feature vectors and their sub-vectors for
reconstruction learning and consistency learning, thereby effectively avoiding
dimensional collapse without relying on projection heads. Since reconstruction
loss and contrastive loss are performed on different features, the adverse
effect of useless private information is reduced. For the incomplete data, the
missing information is recovered by the cross-view prediction mechanism and the
inconsistent information from different views is discarded by the minimum
conditional entropy to further avoid the influence of private information.
Extensive experimental results of the method on 5 public datasets show that the
method achieves state-of-the-art clustering results.
| [
{
"created": "Wed, 22 Mar 2023 00:21:50 GMT",
"version": "v1"
}
] | 2023-03-23 | [
[
"Zhang",
"Kaiwu",
""
],
[
"Du",
"Shiqiang",
""
],
[
"Liu",
"Baokai",
""
],
[
"Gao",
"Shengxia",
""
]
] | Incomplete multi-view clustering (IMVC) is an unsupervised approach, among which IMVC via contrastive learning has received attention due to its excellent performance. Previous methods have the following problems: 1) Over-reliance on additional projection heads when solving the dimensional collapse problem, in which latent features are only valid in lower-dimensional subspaces during clustering; however, many parameters in the projection heads are unnecessary. 2) The recovered views contain inconsistent private information, and useless private information misleads the learning of common semantics, because consistency learning and reconstruction learning are performed on the same feature. To address the above issues, we propose a novel incomplete multi-view contrastive clustering framework. This framework directly optimizes the latent feature subspace and utilizes the learned feature vectors and their sub-vectors for reconstruction learning and consistency learning, thereby effectively avoiding dimensional collapse without relying on projection heads. Since the reconstruction loss and contrastive loss are computed on different features, the adverse effect of useless private information is reduced. For incomplete data, the missing information is recovered by the cross-view prediction mechanism, and inconsistent information from different views is discarded by the minimum conditional entropy to further avoid the influence of private information. Extensive experiments on 5 public datasets show that the method achieves state-of-the-art clustering results.
2001.00003 | Chengyue Jiang | Chengyue Jiang, Zhonglin Nian, Kaihao Guo, Shanbo Chu, Yinggong Zhao,
Libin Shen, Kewei Tu | Learning Numeral Embeddings | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Word embedding is an essential building block for deep learning methods for
natural language processing. Although word embedding has been extensively
studied over the years, the problem of how to effectively embed numerals, a
special subset of words, is still underexplored. Existing word embedding
methods do not learn numeral embeddings well because there are an infinite
number of numerals and their individual appearances in training corpora are
highly scarce. In this paper, we propose two novel numeral embedding methods
that can handle the out-of-vocabulary (OOV) problem for numerals. We first
induce a finite set of prototype numerals using either a self-organizing map or
a Gaussian mixture model. We then represent the embedding of a numeral as a
weighted average of the prototype number embeddings. Numeral embeddings
represented in this manner can be plugged into existing word embedding learning
approaches such as skip-gram for training. We evaluated our methods and showed
its effectiveness on four intrinsic and extrinsic tasks: word similarity,
embedding numeracy, numeral prediction, and sequence labeling.
| [
{
"created": "Sat, 28 Dec 2019 03:15:43 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Jan 2020 13:57:12 GMT",
"version": "v2"
},
{
"created": "Sat, 11 Jan 2020 14:00:55 GMT",
"version": "v3"
}
] | 2020-01-14 | [
[
"Jiang",
"Chengyue",
""
],
[
"Nian",
"Zhonglin",
""
],
[
"Guo",
"Kaihao",
""
],
[
"Chu",
"Shanbo",
""
],
[
"Zhao",
"Yinggong",
""
],
[
"Shen",
"Libin",
""
],
[
"Tu",
"Kewei",
""
]
] | Word embedding is an essential building block for deep learning methods for natural language processing. Although word embedding has been extensively studied over the years, the problem of how to effectively embed numerals, a special subset of words, is still underexplored. Existing word embedding methods do not learn numeral embeddings well because there are an infinite number of numerals and their individual appearances in training corpora are highly scarce. In this paper, we propose two novel numeral embedding methods that can handle the out-of-vocabulary (OOV) problem for numerals. We first induce a finite set of prototype numerals using either a self-organizing map or a Gaussian mixture model. We then represent the embedding of a numeral as a weighted average of the prototype numeral embeddings. Numeral embeddings represented in this manner can be plugged into existing word embedding learning approaches such as skip-gram for training. We evaluated our methods and showed their effectiveness on four intrinsic and extrinsic tasks: word similarity, embedding numeracy, numeral prediction, and sequence labeling.
2304.04172 | Kfir Levy Yehuda | Kfir Y. Levy | $\mu^2$-SGD: Stable Stochastic Optimization via a Double Momentum
Mechanism | null | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider stochastic convex optimization problems where the objective is an
expectation over smooth functions. For this setting we suggest a novel gradient
estimate that combines two recent mechanism that are related to notion of
momentum. Then, we design an SGD-style algorithm as well as an accelerated
version that make use of this new estimator, and demonstrate the robustness of
these new approaches to the choice of the learning rate. Concretely, we show
that these approaches obtain the optimal convergence rates for both noiseless
and noisy case with the same choice of fixed learning rate. Moreover, for the
noisy case we show that these approaches achieve the same optimal bound for a
very wide range of learning rates.
| [
{
"created": "Sun, 9 Apr 2023 06:18:34 GMT",
"version": "v1"
}
] | 2023-04-11 | [
[
"Levy",
"Kfir Y.",
""
]
] | We consider stochastic convex optimization problems where the objective is an expectation over smooth functions. For this setting we suggest a novel gradient estimator that combines two recent mechanisms related to the notion of momentum. We then design an SGD-style algorithm, as well as an accelerated version, that makes use of this new estimator, and demonstrate the robustness of these new approaches to the choice of the learning rate. Concretely, we show that these approaches obtain the optimal convergence rates for both the noiseless and noisy cases with the same choice of fixed learning rate. Moreover, for the noisy case we show that these approaches achieve the same optimal bound for a very wide range of learning rates.
2406.05849 | Jianhao Zheng | Jianhao Zheng, Daniel Barath, Marc Pollefeys, Iro Armeni | MAP-ADAPT: Real-Time Quality-Adaptive Semantic 3D Maps | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Creating 3D semantic reconstructions of environments is fundamental to many
applications, especially when related to autonomous agent operation (e.g.,
goal-oriented navigation or object interaction and manipulation). Commonly, 3D
semantic reconstruction systems capture the entire scene in the same level of
detail. However, certain tasks (e.g., object interaction) require a
fine-grained and high-resolution map, particularly if the objects to interact
are of small size or intricate geometry. In recent practice, this leads to the
entire map being in the same high-quality resolution, which results in
increased computational and storage costs. To address this challenge, we
propose MAP-ADAPT, a real-time method for quality-adaptive semantic 3D
reconstruction using RGBD frames. MAP-ADAPT is the first adaptive semantic 3D
mapping algorithm that, unlike prior work, generates directly a single map with
regions of different quality based on both the semantic information and the
geometric complexity of the scene. Leveraging a semantic SLAM pipeline for pose
and semantic estimation, we achieve comparable or superior results to
state-of-the-art methods on synthetic and real-world data, while significantly
reducing storage and computation requirements.
| [
{
"created": "Sun, 9 Jun 2024 16:48:27 GMT",
"version": "v1"
}
] | 2024-06-11 | [
[
"Zheng",
"Jianhao",
""
],
[
"Barath",
"Daniel",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Armeni",
"Iro",
""
]
] | Creating 3D semantic reconstructions of environments is fundamental to many applications, especially when related to autonomous agent operation (e.g., goal-oriented navigation or object interaction and manipulation). Commonly, 3D semantic reconstruction systems capture the entire scene at the same level of detail. However, certain tasks (e.g., object interaction) require a fine-grained and high-resolution map, particularly if the objects to interact with are small or of intricate geometry. In recent practice, this leads to the entire map being kept at the same high resolution, which results in increased computational and storage costs. To address this challenge, we propose MAP-ADAPT, a real-time method for quality-adaptive semantic 3D reconstruction using RGBD frames. MAP-ADAPT is the first adaptive semantic 3D mapping algorithm that, unlike prior work, directly generates a single map with regions of different quality based on both the semantic information and the geometric complexity of the scene. Leveraging a semantic SLAM pipeline for pose and semantic estimation, we achieve comparable or superior results to state-of-the-art methods on synthetic and real-world data, while significantly reducing storage and computation requirements.
2312.05436 | Xiao Ling | Xiao Ling, Tim Menzies, Christopher Hazard, Jack Shu, Jacob Beel | Trading Off Scalability, Privacy, and Performance in Data Synthesis | 13 pages, 2 figures, 6 tables, submitted to IEEEAccess | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Synthetic data has been widely applied in the real world recently. One
typical example is the creation of synthetic data for privacy concerned
datasets. In this scenario, synthetic data substitute the real data which
contains the privacy information, and is used to public testing for machine
learning models. Another typical example is the unbalance data over-sampling
which the synthetic data is generated in the region of minority samples to
balance the positive and negative ratio when training the machine learning
models. In this study, we concentrate on the first example, and introduce (a)
the Howso engine, and (b) our proposed random projection based synthetic data
generation framework. We evaluate these two algorithms on the aspects of
privacy preservation and accuracy, and compare them to the two state-of-the-art
synthetic data generation algorithms DataSynthesizer and Synthetic Data Vault.
We show that the synthetic data generated by Howso engine has good privacy and
accuracy, which results the best overall score. On the other hand, our proposed
random projection based framework can generate synthetic data with highest
accuracy score, and has the fastest scalability.
| [
{
"created": "Sat, 9 Dec 2023 02:04:25 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Ling",
"Xiao",
""
],
[
"Menzies",
"Tim",
""
],
[
"Hazard",
"Christopher",
""
],
[
"Shu",
"Jack",
""
],
[
"Beel",
"Jacob",
""
]
] | Synthetic data has been widely applied in the real world recently. One typical example is the creation of synthetic data for privacy-sensitive datasets. In this scenario, synthetic data substitutes for the real data containing private information and is used for public testing of machine learning models. Another typical example is over-sampling of imbalanced data, in which synthetic data is generated in the region of minority samples to balance the positive-to-negative ratio when training machine learning models. In this study, we concentrate on the first example and introduce (a) the Howso engine and (b) our proposed random-projection-based synthetic data generation framework. We evaluate these two algorithms with respect to privacy preservation and accuracy, and compare them to two state-of-the-art synthetic data generation algorithms, DataSynthesizer and Synthetic Data Vault. We show that the synthetic data generated by the Howso engine has good privacy and accuracy, which yields the best overall score. On the other hand, our proposed random-projection-based framework generates synthetic data with the highest accuracy score and scales the fastest.
1905.10681 | Ahmed Qureshi | Ahmed H. Qureshi, Jacob J. Johnson, Yuzhe Qin, Taylor Henderson, Byron
Boots, Michael C. Yip | Composing Task-Agnostic Policies with Deep Reinforcement Learning | ICLR 2020 | null | null | null | cs.LG cs.AI cs.RO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The composition of elementary behaviors to solve challenging transfer
learning problems is one of the key elements in building intelligent machines.
To date, there has been plenty of work on learning task-specific policies or
skills but almost no focus on composing necessary, task-agnostic skills to find
a solution to new problems. In this paper, we propose a novel deep
reinforcement learning-based skill transfer and composition method that takes
the agent's primitive policies to solve unseen tasks. We evaluate our method in
difficult cases where training a policy through standard reinforcement learning
(RL) or even hierarchical RL is either not feasible or exhibits high sample
complexity. We show that our method not only transfers skills to new problem
settings but also solves the challenging environments requiring both task
planning and motion control with high data efficiency.
| [
{
"created": "Sat, 25 May 2019 21:40:38 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Dec 2019 20:32:24 GMT",
"version": "v2"
}
] | 2020-01-01 | [
[
"Qureshi",
"Ahmed H.",
""
],
[
"Johnson",
"Jacob J.",
""
],
[
"Qin",
"Yuzhe",
""
],
[
"Henderson",
"Taylor",
""
],
[
"Boots",
"Byron",
""
],
[
"Yip",
"Michael C.",
""
]
] | The composition of elementary behaviors to solve challenging transfer learning problems is one of the key elements in building intelligent machines. To date, there has been plenty of work on learning task-specific policies or skills but almost no focus on composing necessary, task-agnostic skills to find a solution to new problems. In this paper, we propose a novel deep reinforcement learning-based skill transfer and composition method that takes the agent's primitive policies to solve unseen tasks. We evaluate our method in difficult cases where training policy through standard reinforcement learning (RL) or even hierarchical RL is either not feasible or exhibits high sample complexity. We show that our method not only transfers skills to new problem settings but also solves the challenging environments requiring both task planning and motion control with high data efficiency. |
2210.07442 | Minghua Liu | Minghua Liu, Xuanlin Li, Zhan Ling, Yangyan Li, Hao Su | Frame Mining: a Free Lunch for Learning Robotic Manipulation from 3D
Point Clouds | Conference on Robot Learning (CoRL) 2022; Project Website:
https://colin97.github.io/FrameMining/ | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | We study how choices of input point cloud coordinate frames impact learning
of manipulation skills from 3D point clouds. There exist a variety of
coordinate frame choices to normalize captured robot-object-interaction point
clouds. We find that different frames have a profound effect on agent learning
performance, and the trend is similar across 3D backbone networks. In
particular, the end-effector frame and the target-part frame achieve higher
training efficiency than the commonly used world frame and robot-base frame in
many tasks, intuitively because they provide helpful alignments among point
clouds across time steps and thus can simplify visual module learning.
Moreover, the well-performing frames vary across tasks, and some tasks may
benefit from multiple frame candidates. We thus propose FrameMiners to
adaptively select candidate frames and fuse their merits in a task-agnostic
manner. Experimentally, FrameMiners achieves on-par or significantly higher
performance than the best single-frame version on five fully physical
manipulation tasks adapted from ManiSkill and OCRTOC. Without changing existing
camera placements or adding extra cameras, point cloud frame mining can serve
as a free lunch to improve 3D manipulation learning.
| [
{
"created": "Fri, 14 Oct 2022 01:05:44 GMT",
"version": "v1"
}
] | 2022-10-17 | [
[
"Liu",
"Minghua",
""
],
[
"Li",
"Xuanlin",
""
],
[
"Ling",
"Zhan",
""
],
[
"Li",
"Yangyan",
""
],
[
"Su",
"Hao",
""
]
] | We study how choices of input point cloud coordinate frames impact learning of manipulation skills from 3D point clouds. There exist a variety of coordinate frame choices to normalize captured robot-object-interaction point clouds. We find that different frames have a profound effect on agent learning performance, and the trend is similar across 3D backbone networks. In particular, the end-effector frame and the target-part frame achieve higher training efficiency than the commonly used world frame and robot-base frame in many tasks, intuitively because they provide helpful alignments among point clouds across time steps and thus can simplify visual module learning. Moreover, the well-performing frames vary across tasks, and some tasks may benefit from multiple frame candidates. We thus propose FrameMiners to adaptively select candidate frames and fuse their merits in a task-agnostic manner. Experimentally, FrameMiners achieves on-par or significantly higher performance than the best single-frame version on five fully physical manipulation tasks adapted from ManiSkill and OCRTOC. Without changing existing camera placements or adding extra cameras, point cloud frame mining can serve as a free lunch to improve 3D manipulation learning. |
1602.01925 | Waleed Ammar | Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris
Dyer, Noah A. Smith | Massively Multilingual Word Embeddings | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce new methods for estimating and evaluating embeddings of words in
more than fifty languages in a single shared embedding space. Our estimation
methods, multiCluster and multiCCA, use dictionaries and monolingual data; they
do not require parallel data. Our new evaluation method, multiQVEC-CCA, is
shown to correlate better than previous ones with two downstream tasks (text
categorization and parsing). We also describe a web portal for evaluation that
will facilitate further research in this area, along with open-source releases
of all our methods.
| [
{
"created": "Fri, 5 Feb 2016 04:26:38 GMT",
"version": "v1"
},
{
"created": "Sat, 21 May 2016 08:08:21 GMT",
"version": "v2"
}
] | 2016-05-24 | [
[
"Ammar",
"Waleed",
""
],
[
"Mulcaire",
"George",
""
],
[
"Tsvetkov",
"Yulia",
""
],
[
"Lample",
"Guillaume",
""
],
[
"Dyer",
"Chris",
""
],
[
"Smith",
"Noah A.",
""
]
] | We introduce new methods for estimating and evaluating embeddings of words in more than fifty languages in a single shared embedding space. Our estimation methods, multiCluster and multiCCA, use dictionaries and monolingual data; they do not require parallel data. Our new evaluation method, multiQVEC-CCA, is shown to correlate better than previous ones with two downstream tasks (text categorization and parsing). We also describe a web portal for evaluation that will facilitate further research in this area, along with open-source releases of all our methods. |
1705.01507 | Joerg Evermann | Joerg Evermann and Jana-Rebecca Rehse and Peter Fettke | XES Tensorflow - Process Prediction using the Tensorflow Deep-Learning
Framework | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the next activity of a running process is an important aspect of
process management. Recently, artificial neural networks, so-called
deep-learning approaches, have been proposed to address this challenge. This
demo paper describes a software application that applies the Tensorflow
deep-learning framework to process prediction. The software application reads
industry-standard XES files for training and presents the user with an
easy-to-use graphical user interface for both training and prediction. The
system provides several improvements over earlier work. This demo paper focuses
on the software implementation and describes the architecture and user
interface.
| [
{
"created": "Wed, 3 May 2017 16:48:51 GMT",
"version": "v1"
}
] | 2017-05-04 | [
[
"Evermann",
"Joerg",
""
],
[
"Rehse",
"Jana-Rebecca",
""
],
[
"Fettke",
"Peter",
""
]
] | Predicting the next activity of a running process is an important aspect of process management. Recently, artificial neural networks, so called deep-learning approaches, have been proposed to address this challenge. This demo paper describes a software application that applies the Tensorflow deep-learning framework to process prediction. The software application reads industry-standard XES files for training and presents the user with an easy-to-use graphical user interface for both training and prediction. The system provides several improvements over earlier work. This demo paper focuses on the software implementation and describes the architecture and user interface. |
1611.08315 | Sandor P. Fekete | S\'andor P. Fekete and Qian Li and Joseph S. B. Mitchell and Christian
Scheffer | Universal Guard Problems | 28 pages, 19 figures, full version of extended abstract that appeared
in the 27th International Symposium on Algorithms and Computation (ISAAC
2016), 32:1-32:13 | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide a spectrum of results for the Universal Guard Problem, in which
one is to obtain a small set of points ("guards") that are "universal" in their
ability to guard any of a set of possible polygonal domains in the plane. We
give upper and lower bounds on the number of universal guards that are always
sufficient to guard all polygons having a given set of n vertices, or to guard
all polygons in a given set of k polygons on an n-point vertex set. Our upper
bound proofs include algorithms to construct universal guard sets of the
respective cardinalities.
| [
{
"created": "Thu, 24 Nov 2016 21:38:59 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Mar 2017 07:19:12 GMT",
"version": "v2"
}
] | 2017-03-28 | [
[
"Fekete",
"Sándor P.",
""
],
[
"Li",
"Qian",
""
],
[
"Mitchell",
"Joseph S. B.",
""
],
[
"Scheffer",
"Christian",
""
]
] | We provide a spectrum of results for the Universal Guard Problem, in which one is to obtain a small set of points ("guards") that are "universal" in their ability to guard any of a set of possible polygonal domains in the plane. We give upper and lower bounds on the number of universal guards that are always sufficient to guard all polygons having a given set of n vertices, or to guard all polygons in a given set of k polygons on an n-point vertex set. Our upper bound proofs include algorithms to construct universal guard sets of the respective cardinalities. |
2406.02898 | Xiao Zheng | Xiao Zheng, Wenchi Cheng, Jingqing Wang, Wei Zhang | Location-Driven Beamforming for RIS-Assisted Near-Field Communications | 7 pages, 6 figures, accepted by IEEE Communication Magazine | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Future wireless communications are promising to support ubiquitous
connections and high data rates with cost-effective devices. Benefiting from
the energy-efficient elements with low cost, reconfigurable intelligent surface
(RIS) emerges as a potential solution to fulfill such demands, which has the
capability to flexibly manipulate the wireless signals with a tunable phase.
Recently, as the operational frequency ascends to the sub-terahertz (THz) bands
or higher bands for wireless communications in sixth-generation (6G), it becomes
imperative to consider the near-field propagation in RIS-assisted
communications. The challenging acquisition of channel parameters is an
inherent issue for near-field RIS-assisted communications, where the complex
design is essential to acquire the informative near-field channel embedded with
both the angle and distance information. Hence, in this paper we systematically
investigate the potential of exploiting location information for near-field
RIS-assisted communications. Firstly, we present the progresses in the
near-field RIS-assisted communications, which are compatible with existing
wireless communications and show the potential to achieve the fine-grained
localization accuracy to support location-driven scheme. Then, the Fresnel zone
based model is introduced, with which the location-driven beamforming scheme
and corresponding frame structure are developed. Also, we elaborate on four
unique advantages for leveraging location information in RIS-assisted
communications, followed by numerical simulations. Finally, several key
challenges and corresponding potential solutions are pointed out.
| [
{
"created": "Wed, 5 Jun 2024 03:36:38 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jul 2024 02:47:59 GMT",
"version": "v2"
}
] | 2024-07-19 | [
[
"Zheng",
"Xiao",
""
],
[
"Cheng",
"Wenchi",
""
],
[
"Wang",
"Jingqing",
""
],
[
"Zhang",
"Wei",
""
]
] | Future wireless communications are promising to support ubiquitous connections and high data rates with cost-effective devices. Benefiting from the energy-efficient elements with low cost, reconfigurable intelligent surface (RIS) emerges as a potential solution to fulfill such demands, which has the capability to flexibly manipulate the wireless signals with a tunable phase. Recently, as the operational frequency ascends to the sub-terahertz (THz) bands or higher bands for wireless communications in six-generation (6G), it becomes imperative to consider the near-field propagation in RIS-assisted communications. The challenging acquisition of channel parameters is an inherent issue for near-field RIS-assisted communications, where the complex design is essential to acquire the informative near-field channel embedded with both the angle and distance information. Hence, in this paper we systematically investigate the potential of exploiting location information for near-field RIS-assisted communications. Firstly, we present the progresses in the near-field RIS-assisted communications, which are compatible with existing wireless communications and show the potential to achieve the fine-grained localization accuracy to support location-driven scheme. Then, the Fresnel zone based model is introduced, with which the location-driven beamforming scheme and corresponding frame structure are developed. Also, we elaborate on four unique advantages for leveraging location information in RIS-assisted communications, followed by numerical simulations. Finally, several key challenges and corresponding potential solutions are pointed out. |
1805.05345 | Cristian Danescu-Niculescu-Mizil | Justine Zhang, Jonathan P. Chang, Cristian Danescu-Niculescu-Mizil,
Lucas Dixon, Yiqing Hua, Nithum Thain, Dario Taraborelli | Conversations Gone Awry: Detecting Early Signs of Conversational Failure | To appear in the Proceedings of ACL 2018, 15 pages, 1 figure. Data,
quiz, code and additional information at
http://www.cs.cornell.edu/~cristian/Conversations_gone_awry.html | null | null | null | cs.CL cs.AI cs.CY cs.HC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main challenges online social systems face is the prevalence of
antisocial behavior, such as harassment and personal attacks. In this work, we
introduce the task of predicting from the very start of a conversation whether
it will get out of hand. As opposed to detecting undesirable behavior after the
fact, this task aims to enable early, actionable prediction at a time when the
conversation might still be salvaged.
To this end, we develop a framework for capturing pragmatic devices---such as
politeness strategies and rhetorical prompts---used to start a conversation,
and analyze their relation to its future trajectory. Applying this framework in
a controlled setting, we demonstrate the feasibility of detecting early warning
signs of antisocial behavior in online discussions.
| [
{
"created": "Mon, 14 May 2018 18:00:03 GMT",
"version": "v1"
}
] | 2018-05-16 | [
[
"Zhang",
"Justine",
""
],
[
"Chang",
"Jonathan P.",
""
],
[
"Danescu-Niculescu-Mizil",
"Cristian",
""
],
[
"Dixon",
"Lucas",
""
],
[
"Hua",
"Yiqing",
""
],
[
"Thain",
"Nithum",
""
],
[
"Taraborelli",
"Dario",
""
]
] | One of the main challenges online social systems face is the prevalence of antisocial behavior, such as harassment and personal attacks. In this work, we introduce the task of predicting from the very start of a conversation whether it will get out of hand. As opposed to detecting undesirable behavior after the fact, this task aims to enable early, actionable prediction at a time when the conversation might still be salvaged. To this end, we develop a framework for capturing pragmatic devices---such as politeness strategies and rhetorical prompts---used to start a conversation, and analyze their relation to its future trajectory. Applying this framework in a controlled setting, we demonstrate the feasibility of detecting early warning signs of antisocial behavior in online discussions. |
1807.11655 | Ho Bae | Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu
Lee, Sungroh Yoon | Security and Privacy Issues in Deep Learning | null | null | null | null | cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To promote secure and private artificial intelligence (SPAI), we review
studies on the model security and data privacy of DNNs. Model security allows a
system to behave as intended without being affected by malicious external
influences that can compromise its integrity and efficiency. Security attacks
can be divided based on when they occur: if an attack occurs during training,
it is known as a poisoning attack, and if it occurs during inference (after
training) it is termed an evasion attack. Poisoning attacks compromise the
training process by corrupting the data with malicious examples, while evasion
attacks use adversarial examples to disrupt the entire classification process.
Defenses proposed against such attacks include techniques to recognize and
remove malicious data, train a model to be insensitive to such data, and mask
the model's structure and parameters to render attacks more challenging to
implement. Furthermore, the privacy of the data involved in model training is
also threatened by attacks such as the model-inversion attack, or by dishonest
service providers of AI applications. To maintain data privacy, several
solutions that combine existing data-privacy techniques have been proposed,
including differential privacy and modern cryptography techniques. In this
paper, we describe the notions of some of these methods, e.g., homomorphic
encryption, and review their advantages and challenges when implemented in
deep-learning models.
| [
{
"created": "Tue, 31 Jul 2018 04:18:26 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Dec 2018 07:35:31 GMT",
"version": "v2"
},
{
"created": "Sat, 23 Nov 2019 17:25:45 GMT",
"version": "v3"
},
{
"created": "Wed, 10 Mar 2021 00:55:18 GMT",
"version": "v4"
}
] | 2021-03-11 | [
[
"Bae",
"Ho",
""
],
[
"Jang",
"Jaehee",
""
],
[
"Jung",
"Dahuin",
""
],
[
"Jang",
"Hyemi",
""
],
[
"Ha",
"Heonseok",
""
],
[
"Lee",
"Hyungyu",
""
],
[
"Yoon",
"Sungroh",
""
]
] | To promote secure and private artificial intelligence (SPAI), we review studies on the model security and data privacy of DNNs. Model security allows system to behave as intended without being affected by malicious external influences that can compromise its integrity and efficiency. Security attacks can be divided based on when they occur: if an attack occurs during training, it is known as a poisoning attack, and if it occurs during inference (after training) it is termed an evasion attack. Poisoning attacks compromise the training process by corrupting the data with malicious examples, while evasion attacks use adversarial examples to disrupt entire classification process. Defenses proposed against such attacks include techniques to recognize and remove malicious data, train a model to be insensitive to such data, and mask the model's structure and parameters to render attacks more challenging to implement. Furthermore, the privacy of the data involved in model training is also threatened by attacks such as the model-inversion attack, or by dishonest service providers of AI applications. To maintain data privacy, several solutions that combine existing data-privacy techniques have been proposed, including differential privacy and modern cryptography techniques. In this paper, we describe the notions of some of methods, e.g., homomorphic encryption, and review their advantages and challenges when implemented in deep-learning models. |
2009.01195 | Avishek Garain | Avishek Garain | Garain at SemEval-2020 Task 12: Sequence based Deep Learning for
Categorizing Offensive Language in Social Media | Preprint for SemEval-2020 Task 12 System description paper, 8 pages,
3 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | SemEval-2020 Task 12 was OffenseEval: Multilingual Offensive Language
Identification in Social Media (Zampieri et al., 2020). The task was subdivided
into multiple languages and datasets were provided for each one. The task was
further divided into three sub-tasks: offensive language identification,
automatic categorization of offense types, and offense target identification. I
participated in sub-task C, that is, offense target identification. For
preparing the proposed system, I have made use of Deep Learning networks like
LSTMs and frameworks like Keras which combine the bag of words model with
automatically generated sequence based features and manually extracted features
from the given dataset. My system, trained on 25% of the whole dataset,
achieves a macro-averaged F1 score of 47.763%.
| [
{
"created": "Wed, 2 Sep 2020 17:09:29 GMT",
"version": "v1"
}
] | 2020-09-03 | [
[
"Garain",
"Avishek",
""
]
] | SemEval-2020 Task 12 was OffenseEval: Multilingual Offensive Language Identification in Social Media (Zampieri et al., 2020). The task was subdivided into multiple languages and datasets were provided for each one. The task was further divided into three sub-tasks: offensive language identification, automatic categorization of offense types, and offense target identification. I have participated in the task-C, that is, offense target identification. For preparing the proposed system, I have made use of Deep Learning networks like LSTMs and frameworks like Keras which combine the bag of words model with automatically generated sequence based features and manually extracted features from the given dataset. My system on training on 25% of the whole dataset achieves macro averaged f1 score of 47.763%. |
1104.3348 | Nisarg Shah | Krishnendu Chatterjee and Monika Henzinger and Manas Joglekar and
Nisarg Shah | Symbolic Algorithms for Qualitative Analysis of Markov Decision
Processes with B\"uchi Objectives | null | In Formal Methods in System Design, 42(3):301-327, 2013 | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider Markov decision processes (MDPs) with \omega-regular
specifications given as parity objectives. We consider the problem of computing
the set of almost-sure winning states from where the objective can be ensured
with probability 1. The algorithms for the computation of the almost-sure
winning set for parity objectives iteratively use the solutions for the
almost-sure winning set for B\"uchi objectives (a special case of parity
objectives). Our contributions are as follows: First, we present the first
subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs
with B\"uchi objectives; our algorithm takes O(n \sqrt{m}) symbolic steps as
compared to the previously known algorithm that takes O(n^2) symbolic steps,
where $n$ is the number of states and $m$ is the number of edges of the MDP. In
practice MDPs have constant out-degree, and then our symbolic algorithm takes
O(n \sqrt{n}) symbolic steps, as compared to the previously known $O(n^2)$
symbolic steps algorithm. Second, we present a new algorithm, namely the win-lose
algorithm, with the following two properties: (a) the algorithm iteratively
computes subsets of the almost-sure winning set and its complement, as compared
to all previous algorithms that discover the almost-sure winning set upon
termination; and (b) requires O(n \sqrt{K}) symbolic steps, where K is the
maximal number of edges of strongly connected components (scc's) of the MDP.
The win-lose algorithm requires symbolic computation of scc's. Third, we
improve the algorithm for symbolic scc computation; the previously known
algorithm takes linear symbolic steps, and our new algorithm improves the
constants associated with the linear number of steps. In the worst case the
previously known algorithm takes 5n symbolic steps, whereas our new algorithm
takes 4n symbolic steps.
| [
{
"created": "Sun, 17 Apr 2011 20:47:42 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Nov 2014 20:48:12 GMT",
"version": "v2"
}
] | 2014-11-20 | [
[
"Chatterjee",
"Krishnendu",
""
],
[
"Henzinger",
"Monika",
""
],
[
"Joglekar",
"Manas",
""
],
[
"Shah",
"Nisarg",
""
]
] | We consider Markov decision processes (MDPs) with \omega-regular specifications given as parity objectives. We consider the problem of computing the set of almost-sure winning states from where the objective can be ensured with probability 1. The algorithms for the computation of the almost-sure winning set for parity objectives iteratively use the solutions for the almost-sure winning set for B\"uchi objectives (a special case of parity objectives). Our contributions are as follows: First, we present the first subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs with B\"uchi objectives; our algorithm takes O(n \sqrt{m}) symbolic steps as compared to the previous known algorithm that takes O(n^2) symbolic steps, where $n$ is the number of states and $m$ is the number of edges of the MDP. In practice MDPs have constant out-degree, and then our symbolic algorithm takes O(n \sqrt{n}) symbolic steps, as compared to the previous known $O(n^2)$ symbolic steps algorithm. Second, we present a new algorithm, namely win-lose algorithm, with the following two properties: (a) the algorithm iteratively computes subsets of the almost-sure winning set and its complement, as compared to all previous algorithms that discover the almost-sure winning set upon termination; and (b) requires O(n \sqrt{K}) symbolic steps, where K is the maximal number of edges of strongly connected components (scc's) of the MDP. The win-lose algorithm requires symbolic computation of scc's. Third, we improve the algorithm for symbolic scc computation; the previous known algorithm takes linear symbolic steps, and our new algorithm improves the constants associated with the linear number of steps. In the worst case the previous known algorithm takes 5n symbolic steps, whereas our new algorithm takes 4n symbolic steps. |
2302.04116 | Yujin Huang | Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan,
Chunyang Chen | Training-free Lexical Backdoor Attacks on Language Models | Accepted to International World Wide Web Conference 2023, Security,
Privacy & Trust Track | null | 10.1145/3543507.3583348 | null | cs.CR cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale language models have achieved tremendous success across various
natural language processing (NLP) applications. Nevertheless, language models
are vulnerable to backdoor attacks, which inject stealthy triggers into models
for steering them to undesirable behaviors. Most existing backdoor attacks,
such as data poisoning, require further (re)training or fine-tuning language
models to learn the intended backdoor patterns. The additional training process
however diminishes the stealthiness of the attacks, as training a language
model usually requires long optimization time, a massive amount of data, and
considerable modifications to the model parameters. In this work, we propose
Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free
backdoor attack on language models. Our attack is achieved by injecting lexical
triggers into the tokenizer of a language model via manipulating its embedding
dictionary using carefully designed rules. These rules are explainable to human
developers, which inspires attacks from a wider range of hackers. The sparse
manipulation of the dictionary also facilitates the stealthiness of our attack.
We conduct extensive experiments on three dominant NLP tasks based on nine
language models to demonstrate the effectiveness and universality of our
attack. The code of this work is available at
https://github.com/Jinxhy/TFLexAttack.
| [
{
"created": "Wed, 8 Feb 2023 15:18:51 GMT",
"version": "v1"
}
] | 2023-02-09 | [
[
"Huang",
"Yujin",
""
],
[
"Zhuo",
"Terry Yue",
""
],
[
"Xu",
"Qiongkai",
""
],
[
"Hu",
"Han",
""
],
[
"Yuan",
"Xingliang",
""
],
[
"Chen",
"Chunyang",
""
]
] | Large-scale language models have achieved tremendous success across various natural language processing (NLP) applications. Nevertheless, language models are vulnerable to backdoor attacks, which inject stealthy triggers into models for steering them to undesirable behaviors. Most existing backdoor attacks, such as data poisoning, require further (re)training or fine-tuning language models to learn the intended backdoor patterns. The additional training process however diminishes the stealthiness of the attacks, as training a language model usually requires long optimization time, a massive amount of data, and considerable modifications to the model parameters. In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models. Our attack is achieved by injecting lexical triggers into the tokenizer of a language model via manipulating its embedding dictionary using carefully designed rules. These rules are explainable to human developers which inspires attacks from a wider range of hackers. The sparse manipulation of the dictionary also habilitates the stealthiness of our attack. We conduct extensive experiments on three dominant NLP tasks based on nine language models to demonstrate the effectiveness and universality of our attack. The code of this work is available at https://github.com/Jinxhy/TFLexAttack. |
1304.1881 | J\'er\'emie Lumbroso | Olivier Bodini and J\'er\'emie Lumbroso and Nicolas Rolin | Analytic Samplers and the Combinatorial Rejection Method | accepted at ANALCO 2015, 11 pages, 7 figures | null | null | null | cs.DM cs.DS math.CO math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boltzmann samplers, introduced by Duchon et al. in 2001, make it possible to
uniformly draw approximate-size objects from any class which can be specified
through the symbolic method. This is done by evaluating the associated
generating functions to obtain the correct branching probabilities.
But these samplers require evaluating generating functions, in particular in
the neighborhood of their singularity, which is a complex problem; they also
require picking an appropriate tuning value to best control the size of
generated objects. Although Pivoteau et al. have brought a sweeping answer to
the first problem, with the introduction of their Newton oracle, questions
remain.
By adapting the rejection method, a classical tool from random generation, we
show how to obtain a variant of the Boltzmann sampler framework which is
tolerant of approximations, even large ones. Our goal for this is twofold:
this allows for exact sampling with approximate values; but this also allows
much more flexibility in tuning samplers. For the class of simple trees, we
will try to show how this could be used to more easily calibrate samplers.
| [
{
"created": "Sat, 6 Apr 2013 11:48:33 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Sep 2014 18:17:33 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Nov 2014 02:09:58 GMT",
"version": "v3"
}
] | 2014-11-14 | [
[
"Bodini",
"Olivier",
""
],
[
"Lumbroso",
"Jérémie",
""
],
[
"Rolin",
"Nicolas",
""
]
] | Boltzmann samplers, introduced by Duchon et al. in 2001, make it possible to uniformly draw approximate-size objects from any class which can be specified through the symbolic method. This is done by evaluating the associated generating functions to obtain the correct branching probabilities. But these samplers require evaluating generating functions, in particular in the neighborhood of their singularity, which is a complex problem; they also require picking an appropriate tuning value to best control the size of generated objects. Although Pivoteau~\etal have brought a sweeping answer to the first question, with the introduction of their Newton oracle, questions remain. By adapting the rejection method, a classical tool from random generation, we show how to obtain a variant of the Boltzmann sampler framework which is tolerant of approximations, even large ones. Our goal is twofold: this allows for exact sampling with approximate values, but it also allows much more flexibility in tuning samplers. For the class of simple trees, we show how this could be used to more easily calibrate samplers. |
2306.00431 | Jovan Komatovic | Pierre Civit, Seth Gilbert, Rachid Guerraoui, Jovan Komatovic, Matteo
Monti, Manuel Vidigueira | Every Bit Counts in Consensus | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | Consensus enables n processes to agree on a common valid L-bit value, despite
t < n/3 processes being faulty and acting arbitrarily. A long line of work has
been dedicated to improving the worst-case communication complexity of
consensus in partial synchrony. This has recently culminated in the worst-case
word complexity of O(n^2). However, the worst-case bit complexity of the best
solution is still O(n^2 L + n^2 kappa) (where kappa is the security parameter),
far from the \Omega(n L + n^2) lower bound. The gap is significant given the
practical use of consensus primitives, where values typically consist of
batches of large size (L > n).
This paper shows how to narrow the aforementioned gap while achieving optimal
linear latency. Namely, we present a new algorithm, DARE (Disperse, Agree,
REtrieve), that improves upon the O(n^2 L) term via a novel dispersal
primitive. DARE achieves O(n^{1.5} L + n^{2.5} kappa) bit complexity, an
effective sqrt{n}-factor improvement over the state-of-the-art (when L > n
kappa). Moreover, we show that employing heavier cryptographic primitives,
namely STARK proofs, allows us to devise DARE-Stark, a version of DARE which
achieves the near-optimal bit complexity of O(n L + n^2 poly(kappa)). Both DARE
and DARE-Stark achieve optimal O(n) latency.
| [
{
"created": "Thu, 1 Jun 2023 08:18:16 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Aug 2023 12:32:41 GMT",
"version": "v2"
}
] | 2023-08-09 | [
[
"Civit",
"Pierre",
""
],
[
"Gilbert",
"Seth",
""
],
[
"Guerraoui",
"Rachid",
""
],
[
"Komatovic",
"Jovan",
""
],
[
"Monti",
"Matteo",
""
],
[
"Vidigueira",
"Manuel",
""
]
] | Consensus enables n processes to agree on a common valid L-bit value, despite t < n/3 processes being faulty and acting arbitrarily. A long line of work has been dedicated to improving the worst-case communication complexity of consensus in partial synchrony. This has recently culminated in the worst-case word complexity of O(n^2). However, the worst-case bit complexity of the best solution is still O(n^2 L + n^2 kappa) (where kappa is the security parameter), far from the \Omega(n L + n^2) lower bound. The gap is significant given the practical use of consensus primitives, where values typically consist of batches of large size (L > n). This paper shows how to narrow the aforementioned gap while achieving optimal linear latency. Namely, we present a new algorithm, DARE (Disperse, Agree, REtrieve), that improves upon the O(n^2 L) term via a novel dispersal primitive. DARE achieves O(n^{1.5} L + n^{2.5} kappa) bit complexity, an effective sqrt{n}-factor improvement over the state-of-the-art (when L > n kappa). Moreover, we show that employing heavier cryptographic primitives, namely STARK proofs, allows us to devise DARE-Stark, a version of DARE which achieves the near-optimal bit complexity of O(n L + n^2 poly(kappa)). Both DARE and DARE-Stark achieve optimal O(n) latency. |
2207.11707 | Sungha Choi | Sungha Choi, Seunghan Yang, Seokeon Choi, Sungrack Yun | Improving Test-Time Adaptation via Shift-agnostic Weight Regularization
and Nearest Source Prototypes | Accepted to ECCV 2022 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a novel test-time adaptation strategy that adjusts the
model pre-trained on the source domain using only unlabeled online data from
the target domain to alleviate the performance degradation due to the
distribution shift between the source and target domains. Adapting the entire
model parameters using the unlabeled online data may be detrimental due to the
erroneous signals from an unsupervised objective. To mitigate this problem, we
propose a shift-agnostic weight regularization that encourages largely updating
the model parameters sensitive to distribution shift while slightly updating
those insensitive to the shift, during test-time adaptation. This
regularization enables the model to quickly adapt to the target domain without
performance degradation by utilizing the benefit of a high learning rate. In
addition, we present an auxiliary task based on nearest source prototypes to
align the source and target features, which helps reduce the distribution shift
and leads to further performance improvement. We show that our method exhibits
state-of-the-art performance on various standard benchmarks and even
outperforms its supervised counterpart.
| [
{
"created": "Sun, 24 Jul 2022 10:17:05 GMT",
"version": "v1"
}
] | 2022-07-26 | [
[
"Choi",
"Sungha",
""
],
[
"Yang",
"Seunghan",
""
],
[
"Choi",
"Seokeon",
""
],
[
"Yun",
"Sungrack",
""
]
] | This paper proposes a novel test-time adaptation strategy that adjusts the model pre-trained on the source domain using only unlabeled online data from the target domain to alleviate the performance degradation due to the distribution shift between the source and target domains. Adapting the entire model parameters using the unlabeled online data may be detrimental due to the erroneous signals from an unsupervised objective. To mitigate this problem, we propose a shift-agnostic weight regularization that encourages largely updating the model parameters sensitive to distribution shift while slightly updating those insensitive to the shift, during test-time adaptation. This regularization enables the model to quickly adapt to the target domain without performance degradation by utilizing the benefit of a high learning rate. In addition, we present an auxiliary task based on nearest source prototypes to align the source and target features, which helps reduce the distribution shift and leads to further performance improvement. We show that our method exhibits state-of-the-art performance on various standard benchmarks and even outperforms its supervised counterpart. |
1302.4973 | Christopher Meek | Christopher Meek | Strong Completeness and Faithfulness in Bayesian Networks | Appears in Proceedings of the Eleventh Conference on Uncertainty in
Artificial Intelligence (UAI1995) | null | null | UAI-P-1995-PG-411-418 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A completeness result for d-separation applied to discrete Bayesian networks
is presented and it is shown that in a strong measure-theoretic sense almost
all discrete distributions for a given network structure are faithful; i.e. the
independence facts true of the distribution are all and only those entailed by
the network structure.
| [
{
"created": "Wed, 20 Feb 2013 15:22:46 GMT",
"version": "v1"
}
] | 2013-02-21 | [
[
"Meek",
"Christopher",
""
]
] | A completeness result for d-separation applied to discrete Bayesian networks is presented and it is shown that in a strong measure-theoretic sense almost all discrete distributions for a given network structure are faithful; i.e. the independence facts true of the distribution are all and only those entailed by the network structure. |
2012.03449 | Zhaoting Li | Zhaoting Li, Jiankun Wang and Max Q.-H. Meng | Efficient Heuristic Generation for Robot Path Planning with Recurrent
Generative Model | null | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Robot path planning is difficult to solve due to the contradiction between
optimality of results and complexity of algorithms, even in 2D environments. To
find an optimal path, the algorithm needs to search all the state space, which
costs a lot of computation resource. To address this issue, we present a novel
recurrent generative model (RGM) which generates efficient heuristic to reduce
the search efforts of path planning algorithm. This RGM model adopts the
framework of general generative adversarial networks (GAN), which consists of a
novel generator that can generate heuristic by refining the outputs recurrently
and two discriminators that check the connectivity and safety properties of
heuristic. We test the proposed RGM module in various 2D environments to
demonstrate its effectiveness and efficiency. The results show that the RGM
successfully generates appropriate heuristic in both seen and new unseen maps
with a high accuracy, demonstrating the good generalization ability of this
model. We also compare the rapidly-exploring random tree star (RRT*) with
generated heuristic and the conventional RRT* in four different maps, showing
that the generated heuristic can guide the algorithm to find both initial and
optimal solution in a faster and more efficient way.
| [
{
"created": "Mon, 7 Dec 2020 05:03:03 GMT",
"version": "v1"
}
] | 2020-12-08 | [
[
"Li",
"Zhaoting",
""
],
[
"Wang",
"Jiankun",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] | Robot path planning is difficult to solve due to the contradiction between optimality of results and complexity of algorithms, even in 2D environments. To find an optimal path, the algorithm needs to search all the state space, which costs a lot of computation resource. To address this issue, we present a novel recurrent generative model (RGM) which generates efficient heuristic to reduce the search efforts of path planning algorithm. This RGM model adopts the framework of general generative adversarial networks (GAN), which consists of a novel generator that can generate heuristic by refining the outputs recurrently and two discriminators that check the connectivity and safety properties of heuristic. We test the proposed RGM module in various 2D environments to demonstrate its effectiveness and efficiency. The results show that the RGM successfully generates appropriate heuristic in both seen and new unseen maps with a high accuracy, demonstrating the good generalization ability of this model. We also compare the rapidly-exploring random tree star (RRT*) with generated heuristic and the conventional RRT* in four different maps, showing that the generated heuristic can guide the algorithm to find both initial and optimal solution in a faster and more efficient way. |
1907.04383 | Joshua Brakensiek | Joshua Brakensiek, Venkatesan Guruswami, Marcin Wrochna, and Stanislav
\v{Z}ivn\'y | The Power of the Combined Basic LP and Affine Relaxation for Promise
CSPs | 17 pages, to appear in SICOMP | SIAM Journal on Computing 49(6) (2020) 1232-1248 | 10.1137/20M1312745 | null | cs.DS cs.CC cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of constraint satisfaction problems (CSP), promise CSPs are an
exciting new direction of study. In a promise CSP, each constraint comes in two
forms: "strict" and "weak," and in the associated decision problem one must
distinguish between being able to satisfy all the strict constraints versus not
being able to satisfy all the weak constraints. The most commonly cited example
of a promise CSP is the approximate graph coloring problem--which has recently
seen exciting progress [BKO19, WZ20] benefiting from a systematic algebraic
approach to promise CSPs based on "polymorphisms," operations that map tuples
in the strict form of each constraint to tuples in the corresponding weak form.
In this work, we present a simple algorithm which in polynomial time solves
the decision problem for all promise CSPs that admit infinitely many symmetric
polymorphisms, which are invariant under arbitrary coordinate permutations.
This generalizes previous work of the first two authors [BG19]. We also extend
this algorithm to a more general class of block-symmetric polymorphisms. As a
corollary, this single algorithm solves all polynomial-time tractable Boolean
CSPs simultaneously. These results give a new perspective on Schaefer's classic
dichotomy theorem and shed further light on how symmetries of polymorphisms
enable algorithms. Finally, we show that block symmetric polymorphisms are not
only sufficient but also necessary for this algorithm to work, thus
establishing its precise power
| [
{
"created": "Tue, 9 Jul 2019 19:54:36 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jan 2020 21:35:32 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Sep 2020 14:35:12 GMT",
"version": "v3"
}
] | 2020-12-03 | [
[
"Brakensiek",
"Joshua",
""
],
[
"Guruswami",
"Venkatesan",
""
],
[
"Wrochna",
"Marcin",
""
],
[
"Živný",
"Stanislav",
""
]
] | In the field of constraint satisfaction problems (CSP), promise CSPs are an exciting new direction of study. In a promise CSP, each constraint comes in two forms: "strict" and "weak," and in the associated decision problem one must distinguish between being able to satisfy all the strict constraints versus not being able to satisfy all the weak constraints. The most commonly cited example of a promise CSP is the approximate graph coloring problem--which has recently seen exciting progress [BKO19, WZ20] benefiting from a systematic algebraic approach to promise CSPs based on "polymorphisms," operations that map tuples in the strict form of each constraint to tuples in the corresponding weak form. In this work, we present a simple algorithm which in polynomial time solves the decision problem for all promise CSPs that admit infinitely many symmetric polymorphisms, which are invariant under arbitrary coordinate permutations. This generalizes previous work of the first two authors [BG19]. We also extend this algorithm to a more general class of block-symmetric polymorphisms. As a corollary, this single algorithm solves all polynomial-time tractable Boolean CSPs simultaneously. These results give a new perspective on Schaefer's classic dichotomy theorem and shed further light on how symmetries of polymorphisms enable algorithms. Finally, we show that block symmetric polymorphisms are not only sufficient but also necessary for this algorithm to work, thus establishing its precise power |
2103.14417 | Elena Burceanu | Emanuela Haller, Elena Burceanu, Marius Leordeanu | Self-Supervised Learning in Multi-Task Graphs through Iterative
Consensus Shift | Accepted at The British Machine Vision Conference (BMVC) 2021, 12
pages, 6 figures, 5 tables | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human ability to synchronize the feedback from all their senses inspired
recent works in multi-task and multi-modal learning. While these works rely on
expensive supervision, our multi-task graph requires only pseudo-labels from
expert models. Every graph node represents a task, and each edge learns
transformations between tasks. Once initialized, the graph learns self-supervised,
based on a novel consensus shift algorithm that intelligently exploits the
agreement between graph pathways to generate new pseudo-labels for the next
learning cycle. We demonstrate significant improvement from one unsupervised
learning iteration to the next, outperforming related recent methods in
extensive multi-task learning experiments on two challenging datasets. Our code
is available at https://github.com/bit-ml/cshift.
| [
{
"created": "Fri, 26 Mar 2021 11:57:42 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Sep 2021 09:01:06 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Nov 2021 17:59:14 GMT",
"version": "v3"
}
] | 2021-11-05 | [
[
"Haller",
"Emanuela",
""
],
[
"Burceanu",
"Elena",
""
],
[
"Leordeanu",
"Marius",
""
]
] | The human ability to synchronize the feedback from all their senses inspired recent works in multi-task and multi-modal learning. While these works rely on expensive supervision, our multi-task graph requires only pseudo-labels from expert models. Every graph node represents a task, and each edge learns transformations between tasks. Once initialized, the graph learns self-supervised, based on a novel consensus shift algorithm that intelligently exploits the agreement between graph pathways to generate new pseudo-labels for the next learning cycle. We demonstrate significant improvement from one unsupervised learning iteration to the next, outperforming related recent methods in extensive multi-task learning experiments on two challenging datasets. Our code is available at https://github.com/bit-ml/cshift. |
2408.00724 | Yangzhen Wu | Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, Yiming Yang | An Empirical Analysis of Compute-Optimal Inference for Problem-Solving
with Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The optimal training configurations of large language models (LLMs) with
respect to model sizes and compute budgets have been extensively studied. But
how to optimally configure LLMs during inference has not been explored in
sufficient depth. We study compute-optimal inference: designing models and
inference strategies that optimally trade off additional inference-time compute
for improved performance. As a first step towards understanding and designing
compute-optimal inference methods, we assessed the effectiveness and
computational efficiency of multiple inference strategies such as Greedy
Search, Majority Voting, Best-of-N, Weighted Voting, and their variants on two
different Tree Search algorithms, involving different model sizes and
computational budgets. We found that a smaller language model with a novel tree
search algorithm typically achieves a Pareto-optimal trade-off. These results
highlight the potential benefits of deploying smaller models equipped with more
sophisticated decoding algorithms in budget-constrained scenarios, e.g., on
end-devices, to enhance problem-solving accuracy. For instance, we show that
the Llemma-7B model can achieve competitive accuracy to a Llemma-34B model on
MATH500 while using $2\times$ less FLOPs. Our findings could potentially apply
to any generation task with a well-defined measure of success.
| [
{
"created": "Thu, 1 Aug 2024 17:16:04 GMT",
"version": "v1"
}
] | 2024-08-02 | [
[
"Wu",
"Yangzhen",
""
],
[
"Sun",
"Zhiqing",
""
],
[
"Li",
"Shanda",
""
],
[
"Welleck",
"Sean",
""
],
[
"Yang",
"Yiming",
""
]
] | The optimal training configurations of large language models (LLMs) with respect to model sizes and compute budgets have been extensively studied. But how to optimally configure LLMs during inference has not been explored in sufficient depth. We study compute-optimal inference: designing models and inference strategies that optimally trade off additional inference-time compute for improved performance. As a first step towards understanding and designing compute-optimal inference methods, we assessed the effectiveness and computational efficiency of multiple inference strategies such as Greedy Search, Majority Voting, Best-of-N, Weighted Voting, and their variants on two different Tree Search algorithms, involving different model sizes and computational budgets. We found that a smaller language model with a novel tree search algorithm typically achieves a Pareto-optimal trade-off. These results highlight the potential benefits of deploying smaller models equipped with more sophisticated decoding algorithms in budget-constrained scenarios, e.g., on end-devices, to enhance problem-solving accuracy. For instance, we show that the Llemma-7B model can achieve competitive accuracy to a Llemma-34B model on MATH500 while using $2\times$ less FLOPs. Our findings could potentially apply to any generation task with a well-defined measure of success. |
1109.5804 | Petr Hlin\v{e}n\'y | Robert Ganian and Petr Hlin\v{e}n\'y and Alexander Langer and Jan
Obdr\v{z}\'alek and Peter Rossmanith and Somnath Sikdar | Lower Bounds on the Complexity of MSO1 Model-Checking | null | null | null | null | cs.LO cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most important algorithmic meta-theorems is a famous result by
Courcelle, which states that any graph problem definable in monadic
second-order logic with edge-set quantifications (i.e., MSO2 model-checking) is
decidable in linear time on any class of graphs of bounded tree-width.
Recently, Kreutzer and Tazari proved a corresponding complexity lower-bound -
that MSO2 model-checking is not even in XP wrt. the formula size as parameter
for graph classes that are subgraph-closed and whose tree-width is
poly-logarithmically unbounded. Of course, this is not an unconditional result
but holds modulo a certain complexity-theoretic assumption, namely, the
Exponential Time Hypothesis (ETH).
In this paper we present a closely related result. We show that even MSO1
model-checking with a fixed set of vertex labels, but without edge-set
quantifications, is not in XP wrt. the formula size as parameter for graph
classes which are subgraph-closed and whose tree-width is poly-logarithmically
unbounded unless the non-uniform ETH fails. In comparison to Kreutzer and
Tazari; $(1)$ we use a stronger prerequisite, namely non-uniform instead of
uniform ETH, to avoid the effectiveness assumption and the construction of
certain obstructions used in their proofs; and $(2)$ we assume a different set
of problems to be efficiently decidable, namely MSO1-definable properties on
vertex labeled graphs instead of MSO2-definable properties on unlabeled graphs.
Our result has an interesting consequence in the realm of digraph width
measures: Strengthening the recent result, we show that no subdigraph-monotone
measure can be "algorithmically useful", unless it is within a poly-logarithmic
factor of undirected tree-width.
| [
{
"created": "Tue, 27 Sep 2011 08:45:10 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Jun 2012 21:20:13 GMT",
"version": "v2"
}
] | 2012-06-25 | [
[
"Ganian",
"Robert",
""
],
[
"Hliněný",
"Petr",
""
],
[
"Langer",
"Alexander",
""
],
[
"Obdržálek",
"Jan",
""
],
[
"Rossmanith",
"Peter",
""
],
[
"Sikdar",
"Somnath",
""
]
] | One of the most important algorithmic meta-theorems is a famous result by Courcelle, which states that any graph problem definable in monadic second-order logic with edge-set quantifications (i.e., MSO2 model-checking) is decidable in linear time on any class of graphs of bounded tree-width. Recently, Kreutzer and Tazari proved a corresponding complexity lower-bound - that MSO2 model-checking is not even in XP wrt. the formula size as parameter for graph classes that are subgraph-closed and whose tree-width is poly-logarithmically unbounded. Of course, this is not an unconditional result but holds modulo a certain complexity-theoretic assumption, namely, the Exponential Time Hypothesis (ETH). In this paper we present a closely related result. We show that even MSO1 model-checking with a fixed set of vertex labels, but without edge-set quantifications, is not in XP wrt. the formula size as parameter for graph classes which are subgraph-closed and whose tree-width is poly-logarithmically unbounded unless the non-uniform ETH fails. In comparison to Kreutzer and Tazari; $(1)$ we use a stronger prerequisite, namely non-uniform instead of uniform ETH, to avoid the effectiveness assumption and the construction of certain obstructions used in their proofs; and $(2)$ we assume a different set of problems to be efficiently decidable, namely MSO1-definable properties on vertex labeled graphs instead of MSO2-definable properties on unlabeled graphs. Our result has an interesting consequence in the realm of digraph width measures: Strengthening the recent result, we show that no subdigraph-monotone measure can be "algorithmically useful", unless it is within a poly-logarithmic factor of undirected tree-width. |
2310.13304 | Nan Gao | Nan Gao, Sam Nolan, Kaixin Ji, Shakila Khan Rumi, Judith Simone
Heinisch, Christoph Anderson, Klaus David, Flora D. Salim | "Living Within Four Walls": Exploring Emotional and Social Dynamics in
Mobile Usage During Home Confinement | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Home confinement, a situation experienced by individuals for reasons ranging
from medical quarantines, rehabilitation needs, disability accommodations, and
remote working, is a common yet impactful aspect of modern life. While
essential in various scenarios, confinement within the home environment can
profoundly influence psychological well-being and digital device usage. In this
study, we delve into these effects, utilising the COVID-19 lockdown as a
special case study to draw insights extending to various homebound situations.
We conducted an in-situ study with 32 participants living in states affected by
COVID-19 lockdowns for three weeks and analysed their emotions, well-being,
social roles, and mobile usage behaviours. We extracted user activity from app
usage records in an unsupervised manner, and experimental results revealed that
app usage behaviours are effective indicators of emotional well-being in
confined environments. Our research has great potential for developing
supportive strategies and remote programs, not only for people facing similar
medical isolation situations, but also for individuals in long-term home
confinement, such as those with chronic illnesses, recovering from surgery, or
adapting to permanent remote work arrangements.
| [
{
"created": "Fri, 20 Oct 2023 06:36:28 GMT",
"version": "v1"
},
{
"created": "Sat, 8 Jun 2024 10:16:36 GMT",
"version": "v2"
}
] | 2024-06-11 | [
[
"Gao",
"Nan",
""
],
[
"Nolan",
"Sam",
""
],
[
"Ji",
"Kaixin",
""
],
[
"Rumi",
"Shakila Khan",
""
],
[
"Heinisch",
"Judith Simone",
""
],
[
"Anderson",
"Christoph",
""
],
[
"David",
"Klaus",
""
],
[
"Salim",
"Flora D.",
""
]
] | Home confinement, a situation experienced by individuals for reasons ranging from medical quarantines, rehabilitation needs, disability accommodations, and remote working, is a common yet impactful aspect of modern life. While essential in various scenarios, confinement within the home environment can profoundly influence psychological well-being and digital device usage. In this study, we delve into these effects, utilising the COVID-19 lockdown as a special case study to draw insights extending to various homebound situations. We conducted an in-situ study with 32 participants living in states affected by COVID-19 lockdowns for three weeks and analysed their emotions, well-being, social roles, and mobile usage behaviours. We extracted user activity from app usage records in an unsupervised manner, and experimental results revealed that app usage behaviours are effective indicators of emotional well-being in confined environments. Our research has great potential for developing supportive strategies and remote programs, not only for people facing similar medical isolation situations, but also for individuals in long-term home confinement, such as those with chronic illnesses, recovering from surgery, or adapting to permanent remote work arrangements. |
2207.14072 | Chenning Li | Chenning Li, Li Liu, Zhichao Cao, Mi Zhang | WiVelo: Fine-grained Walking Velocity Estimation for Wi-Fi Passive
Tracking | Proceedings of IEEE SECON, 2022 | null | null | null | cs.HC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Passive human tracking via Wi-Fi has been researched broadly in the past
decade. Besides straight-forward anchor point localization, velocity is another
vital sign adopted by the existing approaches to infer user trajectory.
However, state-of-the-art Wi-Fi velocity estimation relies on
Doppler-Frequency-Shift (DFS) which suffers from the inevitable signal noise
incurring unbounded velocity errors, further degrading the tracking accuracy.
In this paper, we present WiVelo\footnote{Code\&datasets are available at
\textit{https://github.com/liecn/WiVelo\_SECON22}} that explores new
spatial-temporal signal correlation features observed from different antennas
to achieve accurate velocity estimation. First, we use subcarrier shift
distribution (SSD) extracted from channel state information (CSI) to define two
correlation features for direction and speed estimation, separately. Then, we
design a mesh model calculated by the antennas' locations to enable a
fine-grained velocity estimation with bounded direction error. Finally, with
the continuously estimated velocity, we develop an end-to-end trajectory
recovery algorithm to mitigate velocity outliers with the property of walking
velocity continuity. We implement WiVelo on commodity Wi-Fi hardware and
extensively evaluate its tracking accuracy in various environments. The
experimental results show our median and 90\% tracking errors are 0.47~m and
1.06~m, which are half and a quarter of state-of-the-arts.
| [
{
"created": "Thu, 28 Jul 2022 13:24:07 GMT",
"version": "v1"
}
] | 2022-07-29 | [
[
"Li",
"Chenning",
""
],
[
"Liu",
"Li",
""
],
[
"Cao",
"Zhichao",
""
],
[
"Zhang",
"Mi",
""
]
] | Passive human tracking via Wi-Fi has been researched broadly in the past decade. Besides straight-forward anchor point localization, velocity is another vital sign adopted by the existing approaches to infer user trajectory. However, state-of-the-art Wi-Fi velocity estimation relies on Doppler-Frequency-Shift (DFS) which suffers from the inevitable signal noise incurring unbounded velocity errors, further degrading the tracking accuracy. In this paper, we present WiVelo\footnote{Code\&datasets are available at \textit{https://github.com/liecn/WiVelo\_SECON22}} that explores new spatial-temporal signal correlation features observed from different antennas to achieve accurate velocity estimation. First, we use subcarrier shift distribution (SSD) extracted from channel state information (CSI) to define two correlation features for direction and speed estimation, separately. Then, we design a mesh model calculated by the antennas' locations to enable a fine-grained velocity estimation with bounded direction error. Finally, with the continuously estimated velocity, we develop an end-to-end trajectory recovery algorithm to mitigate velocity outliers with the property of walking velocity continuity. We implement WiVelo on commodity Wi-Fi hardware and extensively evaluate its tracking accuracy in various environments. The experimental results show our median and 90\% tracking errors are 0.47~m and 1.06~m, which are half and a quarter of state-of-the-arts. |
2102.12013 | Jianfeng Chi | Jianfeng Chi, Yuan Tian, Geoffrey J. Gordon, Han Zhao | Understanding and Mitigating Accuracy Disparity in Regression | ICML 2021 | null | null | null | cs.LG cs.CY stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the widespread deployment of large-scale prediction systems in
high-stakes domains, e.g., face recognition, criminal justice, etc., disparity
in prediction accuracy between different demographic subgroups has called for
fundamental understanding on the source of such disparity and algorithmic
intervention to mitigate it. In this paper, we study the accuracy disparity
problem in regression. To begin with, we first propose an error decomposition
theorem, which decomposes the accuracy disparity into the distance between
marginal label distributions and the distance between conditional
representations, to help explain why such accuracy disparity appears in
practice. Motivated by this error decomposition and the general idea of
distribution alignment with statistical distances, we then propose an algorithm
to reduce this disparity, and analyze its game-theoretic optima of the proposed
objective functions. To corroborate our theoretical findings, we also conduct
experiments on five benchmark datasets. The experimental results suggest that
our proposed algorithms can effectively mitigate accuracy disparity while
maintaining the predictive power of the regression models.
| [
{
"created": "Wed, 24 Feb 2021 01:24:50 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Jun 2021 01:41:56 GMT",
"version": "v2"
}
] | 2021-06-15 | [
[
"Chi",
"Jianfeng",
""
],
[
"Tian",
"Yuan",
""
],
[
"Gordon",
"Geoffrey J.",
""
],
[
"Zhao",
"Han",
""
]
] | With the widespread deployment of large-scale prediction systems in high-stakes domains, e.g., face recognition, criminal justice, etc., disparity in prediction accuracy between different demographic subgroups has called for fundamental understanding on the source of such disparity and algorithmic intervention to mitigate it. In this paper, we study the accuracy disparity problem in regression. To begin with, we first propose an error decomposition theorem, which decomposes the accuracy disparity into the distance between marginal label distributions and the distance between conditional representations, to help explain why such accuracy disparity appears in practice. Motivated by this error decomposition and the general idea of distribution alignment with statistical distances, we then propose an algorithm to reduce this disparity, and analyze its game-theoretic optima of the proposed objective functions. To corroborate our theoretical findings, we also conduct experiments on five benchmark datasets. The experimental results suggest that our proposed algorithms can effectively mitigate accuracy disparity while maintaining the predictive power of the regression models. |
2103.06546 | Li Fan | Fan Li, Yongming Li, Pin Wang, Jie Xiao, Fang Yan, Xinke Li | Integrated Age Estimation Mechanism | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine-learning-based age estimation has received lots of attention.
Traditional age estimation mechanism focuses estimation age error, but ignores
that there is a deviation between the estimated age and real age due to
disease. Pathological age estimation mechanism the author proposed before
introduces age deviation to solve the above problem and improves classification
capability of the estimated age significantly. However, it does not consider the
age estimation error of the normal control (NC) group and results in a larger
error between the estimated age and real age of NC group. Therefore, an
integrated age estimation mechanism based on Decision-Level fusion of error and
deviation orientation model is proposed to solve the problem. Firstly, the
traditional age estimation and pathological age estimation mechanisms are
weighted together. Secondly, their optimal weights are obtained by minimizing
mean absolute error (MAE) between the estimated age and real age of normal
people. In the experimental section, several representative age-related
datasets are used for verification of the proposed method. The results show
that the proposed age estimation mechanism achieves a good tradeoff effect of
age estimation. It not only improves the classification ability of the
estimated age, but also reduces the age estimation error of the NC group. In
general, the proposed age estimation mechanism is effective. Additionally, the
mechanism is a framework mechanism that can be used to construct different
specific age estimation algorithms, contributing to relevant research.
| [
{
"created": "Thu, 11 Mar 2021 09:14:10 GMT",
"version": "v1"
}
] | 2021-03-12 | [
[
"Li",
"Fan",
""
],
[
"Li",
"Yongming",
""
],
[
"Wang",
"Pin",
""
],
[
"Xiao",
"Jie",
""
],
[
"Yan",
"Fang",
""
],
[
"Li",
"Xinke",
""
]
] ] | Machine-learning-based age estimation has received lots of attention. Traditional age estimation mechanism focuses estimation age error, but ignores that there is a deviation between the estimated age and real age due to disease. Pathological age estimation mechanism the author proposed before introduces age deviation to solve the above problem and improves classification capability of the estimated age significantly. However, it does not consider the age estimation error of the normal control (NC) group and results in a larger error between the estimated age and real age of NC group. Therefore, an integrated age estimation mechanism based on Decision-Level fusion of error and deviation orientation model is proposed to solve the problem. Firstly, the traditional age estimation and pathological age estimation mechanisms are weighted together. Secondly, their optimal weights are obtained by minimizing mean absolute error (MAE) between the estimated age and real age of normal people. In the experimental section, several representative age-related datasets are used for verification of the proposed method. The results show that the proposed age estimation mechanism achieves a good tradeoff effect of age estimation. It not only improves the classification ability of the estimated age, but also reduces the age estimation error of the NC group. In general, the proposed age estimation mechanism is effective. Additionally, the mechanism is a framework mechanism that can be used to construct different specific age estimation algorithms, contributing to relevant research. |
2209.03726 | Gabriel Kasmi | Gabriel Kasmi, Yves-Marie Saint-Drenan, David Trebosc, Rapha\"el
Jolivet, Jonathan Leloux, Babacar Sarr, Laurent Dubus | A crowdsourced dataset of aerial images with annotated solar
photovoltaic arrays and installation metadata | 12 pages, 3 figures, 7 tables, revised preprint resubmitted to
Scientific Data | null | 10.1038/s41597-023-01951-4 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Photovoltaic (PV) energy generation plays a crucial role in the energy
transition. Small-scale PV installations are deployed at an unprecedented pace,
and their integration into the grid can be challenging since public authorities
often lack quality data about them. Overhead imagery is increasingly used to
improve the knowledge of residential PV installations with machine learning
models capable of automatically mapping these installations. However, these
models cannot be easily transferred from one region or data source to another
due to differences in image acquisition. To address this issue known as domain
shift and foster the development of PV array mapping pipelines, we propose a
dataset containing aerial images, annotations, and segmentation masks. We
provide installation metadata for more than 28,000 installations. We provide
ground truth segmentation masks for 13,000 installations, including 7,000 with
annotations for two different image providers. Finally, we provide installation
metadata that matches the annotation for more than 8,000 installations. Dataset
applications include end-to-end PV registry construction, robust PV
installations mapping, and analysis of crowdsourced datasets.
| [
{
"created": "Thu, 8 Sep 2022 11:42:53 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Dec 2022 13:38:57 GMT",
"version": "v2"
}
] | 2023-02-01 | [
[
"Kasmi",
"Gabriel",
""
],
[
"Saint-Drenan",
"Yves-Marie",
""
],
[
"Trebosc",
"David",
""
],
[
"Jolivet",
"Raphaël",
""
],
[
"Leloux",
"Jonathan",
""
],
[
"Sarr",
"Babacar",
""
],
[
"Dubus",
"Laurent",
""
]
] | Photovoltaic (PV) energy generation plays a crucial role in the energy transition. Small-scale PV installations are deployed at an unprecedented pace, and their integration into the grid can be challenging since public authorities often lack quality data about them. Overhead imagery is increasingly used to improve the knowledge of residential PV installations with machine learning models capable of automatically mapping these installations. However, these models cannot be easily transferred from one region or data source to another due to differences in image acquisition. To address this issue known as domain shift and foster the development of PV array mapping pipelines, we propose a dataset containing aerial images, annotations, and segmentation masks. We provide installation metadata for more than 28,000 installations. We provide ground truth segmentation masks for 13,000 installations, including 7,000 with annotations for two different image providers. Finally, we provide installation metadata that matches the annotation for more than 8,000 installations. Dataset applications include end-to-end PV registry construction, robust PV installations mapping, and analysis of crowdsourced datasets. |
2311.13538 | Zhicheng Yang | Zhicheng Yang, Yinya Huang, Jing Xiong, Liang Feng, Xiaodan Liang,
Yiwei Wang, Jing Tang | AlignedCoT: Prompting Large Language Models via Native-Speaking
Demonstrations | 8 pages, 6 figures | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models prompting, such as using in-context demonstrations, is
a mainstream technique for invoking LLMs to perform high-performance and solid
complex reasoning (e.g., mathematical reasoning, commonsense reasoning), and
has the potential for further human-machine collaborative scientific findings.
However, current LLMs are delicate and elusive in prompt words and styles. And
there is an unseen gap between LLM understanding and human-written prompts.
This paper introduces AlignedCoT, an LLM-acquainted prompting technique that
includes proficient "native-speaking" in in-context learning for the LLMs.
Specifically, it achieves consistent and correct step-wise prompts in zero-shot
scenarios by progressively probing, refining, and formatting the LLM chain of
thoughts so that free from handcrafted few-shot demonstrations while
maintaining the prompt quality. We conduct experiments on mathematical
reasoning and commonsense reasoning. We find that LLMs with AlignedCoT perform
significantly superior to them with human-crafted demonstrations. We further
apply AlignedCoT for rewriting the GSM8k training set, resulting in a
GSM8k-Align dataset. We observe its benefits for retrieval augmented
generation.
| [
{
"created": "Wed, 22 Nov 2023 17:24:21 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Jan 2024 14:16:41 GMT",
"version": "v2"
},
{
"created": "Sat, 27 Jan 2024 10:10:20 GMT",
"version": "v3"
},
{
"created": "Sat, 13 Jul 2024 13:36:09 GMT",
"version": "v4"
}
] | 2024-07-16 | [
[
"Yang",
"Zhicheng",
""
],
[
"Huang",
"Yinya",
""
],
[
"Xiong",
"Jing",
""
],
[
"Feng",
"Liang",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Wang",
"Yiwei",
""
],
[
"Tang",
"Jing",
""
]
] | Large Language Models prompting, such as using in-context demonstrations, is a mainstream technique for invoking LLMs to perform high-performance and solid complex reasoning (e.g., mathematical reasoning, commonsense reasoning), and has the potential for further human-machine collaborative scientific findings. However, current LLMs are delicate and elusive in prompt words and styles. And there is an unseen gap between LLM understanding and human-written prompts. This paper introduces AlignedCoT, an LLM-acquainted prompting technique that includes proficient "native-speaking" in in-context learning for the LLMs. Specifically, it achieves consistent and correct step-wise prompts in zero-shot scenarios by progressively probing, refining, and formatting the LLM chain of thoughts so that free from handcrafted few-shot demonstrations while maintaining the prompt quality. We conduct experiments on mathematical reasoning and commonsense reasoning. We find that LLMs with AlignedCoT perform significantly superior to them with human-crafted demonstrations. We further apply AlignedCoT for rewriting the GSM8k training set, resulting in a GSM8k-Align dataset. We observe its benefits for retrieval augmented generation. |
1904.10885 | Martino Borello | Martino Borello and Wolfgang Willems | Group codes over fields are asymptotically good | 8 pages | null | null | null | cs.IT math.CO math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Group codes are right or left ideals in a group algebra of a finite group
over a finite field. Following ideas of Bazzi and Mitter on group codes over
the binary field, we prove that group codes over finite fields of any
characteristic are asymptotically good.
| [
{
"created": "Wed, 24 Apr 2019 15:53:46 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jan 2020 13:53:40 GMT",
"version": "v2"
}
] | 2020-01-22 | [
[
"Borello",
"Martino",
""
],
[
"Willems",
"Wolfgang",
""
]
] | Group codes are right or left ideals in a group algebra of a finite group over a finite field. Following ideas of Bazzi and Mitter on group codes over the binary field, we prove that group codes over finite fields of any characteristic are asymptotically good. |
2211.06344 | Wenjun Jiang | Wenjun Jiang and Xiaojun Yuan | Simultaneous Active and Passive Information Transfer for RIS-Aided MIMO
Systems: Iterative Decoding and Evolution Analysis | 15 pages, 7 figures | null | 10.1109/TSP.2023.3272728 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the potential of reconfigurable intelligent surface
(RIS) for passive information transfer in a RIS-aided multiple-input
multiple-output (MIMO) system. We propose a novel simultaneous active and
passive information transfer (SAPIT) scheme. In SAPIT, the transmitter (Tx) and
the RIS deliver information simultaneously, where the RIS information is
carried through the RIS phase shifts embedded in reflected signals. We
introduce the coded modulation technique at the Tx and the RIS. The main
challenge of the SAPIT scheme is to simultaneously detect the Tx signals and
the RIS phase coefficients at the receiver. To address this challenge, we
introduce appropriate auxiliary variables to convert the original signal model
into two linear models with respect to the Tx signals and one entry-by-entry
bilinear model with respect to the RIS phase coefficients. With this auxiliary
signal model, we develop a message-passing-based receiver algorithm.
Furthermore, we analyze the fundamental performance limit of the proposed
SAPIT-MIMO transceiver. Notably, we establish the state evolution to predict
the receiver performance in a large-size system. We further analyze the
achievable rates of the Tx and the RIS, which provides insight into the code
design for sum-rate maximization. Numerical results validate our analysis and
show that the SAPIT scheme outperforms the passive beamforming counterpart in
achievable sum rate of the Tx and the RIS.
| [
{
"created": "Fri, 11 Nov 2022 16:45:01 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Apr 2023 03:46:23 GMT",
"version": "v2"
},
{
"created": "Mon, 1 May 2023 02:59:29 GMT",
"version": "v3"
}
] | 2023-05-31 | [
[
"Jiang",
"Wenjun",
""
],
[
"Yuan",
"Xiaojun",
""
]
] | This paper investigates the potential of reconfigurable intelligent surface (RIS) for passive information transfer in a RIS-aided multiple-input multiple-output (MIMO) system. We propose a novel simultaneous active and passive information transfer (SAPIT) scheme. In SAPIT, the transmitter (Tx) and the RIS deliver information simultaneously, where the RIS information is carried through the RIS phase shifts embedded in reflected signals. We introduce the coded modulation technique at the Tx and the RIS. The main challenge of the SAPIT scheme is to simultaneously detect the Tx signals and the RIS phase coefficients at the receiver. To address this challenge, we introduce appropriate auxiliary variables to convert the original signal model into two linear models with respect to the Tx signals and one entry-by-entry bilinear model with respect to the RIS phase coefficients. With this auxiliary signal model, we develop a message-passing-based receiver algorithm. Furthermore, we analyze the fundamental performance limit of the proposed SAPIT-MIMO transceiver. Notably, we establish the state evolution to predict the receiver performance in a large-size system. We further analyze the achievable rates of the Tx and the RIS, which provides insight into the code design for sum-rate maximization. Numerical results validate our analysis and show that the SAPIT scheme outperforms the passive beamforming counterpart in achievable sum rate of the Tx and the RIS. |
1303.6541 | Kai Su | Kai Su, Dan Zhang, Narayan B. Mandayam | Dynamic Radio Resource Management for Random Network Coding: Power
Control and CSMA Backoff Control | 28 pages, 9 figures. Submitted to IEEE Transactions on Wireless
Communications | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Resource allocation in wireless networks typically occurs at PHY/MAC layers,
while random network coding (RNC) is a network layer strategy. An interesting
question is how resource allocation mechanisms can be tuned to improve RNC
performance. By means of a differential equation framework which models RNC
throughput in terms of lower layer parameters, we propose a gradient based
approach that can dynamically allocate MAC and PHY layer resources with the
goal of maximizing the minimum network coding throughput among all the
destination nodes in a RNC multicast. We exemplify this general approach with
two resource allocation problems: (i) power control to improve network coding
throughput, and (ii) CSMA mean backoff delay control to improve network coding
throughput. We design both centralized algorithms and online algorithms for
power control and CSMA backoff control. Our evaluations, including numerically
solving the differential equations in the centralized algorithm and an
event-driven simulation for the online algorithm, show that such gradient based
dynamic resource allocation yields significant throughput improvement of the
destination nodes in RNC. Further, our numerical results reveal that network
coding aware power control can regain the broadcast advantage of wireless
transmissions to improve the throughput.
| [
{
"created": "Tue, 26 Mar 2013 16:17:07 GMT",
"version": "v1"
}
] | 2013-03-27 | [
[
"Su",
"Kai",
""
],
[
"Zhang",
"Dan",
""
],
[
"Mandayam",
"Narayan B.",
""
]
] | Resource allocation in wireless networks typically occurs at PHY/MAC layers, while random network coding (RNC) is a network layer strategy. An interesting question is how resource allocation mechanisms can be tuned to improve RNC performance. By means of a differential equation framework which models RNC throughput in terms of lower layer parameters, we propose a gradient based approach that can dynamically allocate MAC and PHY layer resources with the goal of maximizing the minimum network coding throughput among all the destination nodes in a RNC multicast. We exemplify this general approach with two resource allocation problems: (i) power control to improve network coding throughput, and (ii) CSMA mean backoff delay control to improve network coding throughput. We design both centralized algorithms and online algorithms for power control and CSMA backoff control. Our evaluations, including numerically solving the differential equations in the centralized algorithm and an event-driven simulation for the online algorithm, show that such gradient based dynamic resource allocation yields significant throughput improvement of the destination nodes in RNC. Further, our numerical results reveal that network coding aware power control can regain the broadcast advantage of wireless transmissions to improve the throughput. |
1904.04620 | Jiwoong Choi | Jiwoong Choi, Dayoung Chun, Hyun Kim, Hyuk-Jae Lee | Gaussian YOLOv3: An Accurate and Fast Object Detector Using Localization
Uncertainty for Autonomous Driving | ICCV 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of object detection algorithms is becoming increasingly important in
autonomous vehicles, and object detection at high accuracy and a fast inference
speed is essential for safe autonomous driving. A false positive (FP) from a
false localization during autonomous driving can lead to fatal accidents and
hinder safe and efficient driving. Therefore, a detection algorithm that can
cope with mislocalizations is required in autonomous driving applications. This
paper proposes a method for improving the detection accuracy while supporting a
real-time operation by modeling the bounding box (bbox) of YOLOv3, which is the
most representative of one-stage detectors, with a Gaussian parameter and
redesigning the loss function. In addition, this paper proposes a method for
predicting the localization uncertainty that indicates the reliability of bbox.
By using the predicted localization uncertainty during the detection process,
the proposed schemes can significantly reduce the FP and increase the true
positive (TP), thereby improving the accuracy. Compared to a conventional
YOLOv3, the proposed algorithm, Gaussian YOLOv3, improves the mean average
precision (mAP) by 3.09 and 3.5 on the KITTI and Berkeley deep drive (BDD)
datasets, respectively. Nevertheless, the proposed algorithm is capable of
real-time detection at faster than 42 frames per second (fps) and shows a
higher accuracy than previous approaches with a similar fps. Therefore, the
proposed algorithm is the most suitable for autonomous driving applications.
| [
{
"created": "Tue, 9 Apr 2019 12:23:55 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Aug 2019 11:11:02 GMT",
"version": "v2"
}
] | 2019-08-13 | [
[
"Choi",
"Jiwoong",
""
],
[
"Chun",
"Dayoung",
""
],
[
"Kim",
"Hyun",
""
],
[
"Lee",
"Hyuk-Jae",
""
]
] | The use of object detection algorithms is becoming increasingly important in autonomous vehicles, and object detection at high accuracy and a fast inference speed is essential for safe autonomous driving. A false positive (FP) from a false localization during autonomous driving can lead to fatal accidents and hinder safe and efficient driving. Therefore, a detection algorithm that can cope with mislocalizations is required in autonomous driving applications. This paper proposes a method for improving the detection accuracy while supporting a real-time operation by modeling the bounding box (bbox) of YOLOv3, which is the most representative of one-stage detectors, with a Gaussian parameter and redesigning the loss function. In addition, this paper proposes a method for predicting the localization uncertainty that indicates the reliability of bbox. By using the predicted localization uncertainty during the detection process, the proposed schemes can significantly reduce the FP and increase the true positive (TP), thereby improving the accuracy. Compared to a conventional YOLOv3, the proposed algorithm, Gaussian YOLOv3, improves the mean average precision (mAP) by 3.09 and 3.5 on the KITTI and Berkeley deep drive (BDD) datasets, respectively. Nevertheless, the proposed algorithm is capable of real-time detection at faster than 42 frames per second (fps) and shows a higher accuracy than previous approaches with a similar fps. Therefore, the proposed algorithm is the most suitable for autonomous driving applications. |
2310.12131 | Dwaipayan Roy | Subinay Adhikary, Sagnik Das, Sagnik Saha, Procheta Sen, Dwaipayan
Roy, Kripabandhu Ghosh | Automated Attribute Extraction from Legal Proceedings | Presented in Mining and Learning in the Legal Domain (MLLD) workshop
2023 | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | The escalating number of pending cases is a growing concern world-wide.
Recent advancements in digitization have opened up possibilities for leveraging
artificial intelligence (AI) tools in the processing of legal documents.
Adopting a structured representation for legal documents, as opposed to a mere
bag-of-words flat text representation, can significantly enhance processing
capabilities. With the aim of achieving this objective, we put forward a set of
diverse attributes for criminal case proceedings. We use a state-of-the-art
sequence labeling framework to automatically extract attributes from the legal
documents. Moreover, we demonstrate the efficacy of the extracted attributes in
a downstream task, namely legal judgment prediction.
| [
{
"created": "Wed, 18 Oct 2023 17:41:28 GMT",
"version": "v1"
}
] | 2023-10-19 | [
[
"Adhikary",
"Subinay",
""
],
[
"Das",
"Sagnik",
""
],
[
"Saha",
"Sagnik",
""
],
[
"Sen",
"Procheta",
""
],
[
"Roy",
"Dwaipayan",
""
],
[
"Ghosh",
"Kripabandhu",
""
]
] | The escalating number of pending cases is a growing concern world-wide. Recent advancements in digitization have opened up possibilities for leveraging artificial intelligence (AI) tools in the processing of legal documents. Adopting a structured representation for legal documents, as opposed to a mere bag-of-words flat text representation, can significantly enhance processing capabilities. With the aim of achieving this objective, we put forward a set of diverse attributes for criminal case proceedings. We use a state-of-the-art sequence labeling framework to automatically extract attributes from the legal documents. Moreover, we demonstrate the efficacy of the extracted attributes in a downstream task, namely legal judgment prediction. |
2107.11777 | Yujie Tang | Yujie Tang, Liang Hu, Qingrui Zhang and Wei Pan | Reinforcement Learning Compensated Extended Kalman Filter for Attitude
Estimation | This paper has been accepted by IROS 2021 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Inertial measurement units are widely used in different fields to estimate
the attitude. Many algorithms have been proposed to improve estimation
performance. However, most of them still suffer from 1) inaccurate initial
estimation, 2) inaccurate initial filter gain, and 3) non-Gaussian process
and/or measurement noise. In this paper, we leverage reinforcement learning to
compensate for the classical extended Kalman filter estimation, i.e., to learn
the filter gain from the sensor measurements. We also analyse the convergence
of the estimate error. The effectiveness of the proposed algorithm is validated
on both simulated data and real data.
| [
{
"created": "Sun, 25 Jul 2021 10:39:32 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jul 2021 16:14:29 GMT",
"version": "v2"
}
] | 2021-07-28 | [
[
"Tang",
"Yujie",
""
],
[
"Hu",
"Liang",
""
],
[
"Zhang",
"Qingrui",
""
],
[
"Pan",
"Wei",
""
]
] | Inertial measurement units are widely used in different fields to estimate the attitude. Many algorithms have been proposed to improve estimation performance. However, most of them still suffer from 1) inaccurate initial estimation, 2) inaccurate initial filter gain, and 3) non-Gaussian process and/or measurement noise. In this paper, we leverage reinforcement learning to compensate for the classical extended Kalman filter estimation, i.e., to learn the filter gain from the sensor measurements. We also analyse the convergence of the estimate error. The effectiveness of the proposed algorithm is validated on both simulated data and real data. |
0912.5079 | Travis Gagie | Travis Gagie | A Lower Bound on the Complexity of Approximating the Entropy of a Markov
Source | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Suppose that, for any (k \geq 1), (\epsilon > 0) and sufficiently large
$\sigma$, we are given a black box that allows us to sample characters from a
$k$th-order Markov source over the alphabet (\{0, ..., \sigma - 1\}). Even if
we know the source has entropy either 0 or at least (\log (\sigma - k)), there
is still no algorithm that, with probability bounded away from (1 / 2), guesses
the entropy correctly after sampling at most ((\sigma - k)^{k / 2 - \epsilon})
characters.
| [
{
"created": "Sun, 27 Dec 2009 14:39:30 GMT",
"version": "v1"
}
] | 2009-12-31 | [
[
"Gagie",
"Travis",
""
]
] | Suppose that, for any (k \geq 1), (\epsilon > 0) and sufficiently large $\sigma$, we are given a black box that allows us to sample characters from a $k$th-order Markov source over the alphabet (\{0, ..., \sigma - 1\}). Even if we know the source has entropy either 0 or at least (\log (\sigma - k)), there is still no algorithm that, with probability bounded away from (1 / 2), guesses the entropy correctly after sampling at most ((\sigma - k)^{k / 2 - \epsilon}) characters. |
1310.5474 | Safeeullah Soomro | Syed Asif Ali, Safeeullah Soomro, Abdul Ghafoor Memon and Abdul Baqi | Implementation of Automata Theory to Improve the Learning Disability | null | Sindh Univ. Res. Jour. (Sci. Ser.) Vol. 45 (1):1193-196 (2013) | null | null | cs.CY | http://creativecommons.org/licenses/by/3.0/ | There are various types of disability egress in world like blindness,
deafness, and Physical disabilities. It is quite difficult to deal with people
with disability. Learning disability (LD) is types of disability totally
different from general disability. To deal children with learning disability is
difficult for both parents and teacher. As parent deal with only single child
so it bit easy. But teacher deals with different students at a time so its more
difficult to deal with group of students with learning disability. If there is
more students with learning disability so it is necessary that first all
identify the type of learning disability in group of students. Some students
have learning disability of mathematics; some have learning disability of other
subjects. By using theory of Automata it easy to analysis the level of
disability among all students then deal with them accordingly. For these
purpose deterministic automata is the best practice. Teacher deals with
deterministic students in class and check there response. In this research
deterministic automata is use to facilitated the teacher which help teacher in
identification of students with learning disability.
| [
{
"created": "Mon, 21 Oct 2013 09:25:19 GMT",
"version": "v1"
}
] | 2013-10-22 | [
[
"Ali",
"Syed Asif",
""
],
[
"Soomro",
"Safeeullah",
""
],
[
"Memon",
"Abdul Ghafoor",
""
],
[
"Baqi",
"Abdul",
""
]
] | There are various types of disability egress in world like blindness, deafness, and Physical disabilities. It is quite difficult to deal with people with disability. Learning disability (LD) is types of disability totally different from general disability. To deal children with learning disability is difficult for both parents and teacher. As parent deal with only single child so it bit easy. But teacher deals with different students at a time so its more difficult to deal with group of students with learning disability. If there is more students with learning disability so it is necessary that first all identify the type of learning disability in group of students. Some students have learning disability of mathematics; some have learning disability of other subjects. By using theory of Automata it easy to analysis the level of disability among all students then deal with them accordingly. For these purpose deterministic automata is the best practice. Teacher deals with deterministic students in class and check there response. In this research deterministic automata is use to facilitated the teacher which help teacher in identification of students with learning disability. |
1410.4954 | Mohammad Mansour | Mohammad M. Mansour | Pruned Bit-Reversal Permutations: Mathematical Characterization, Fast
Algorithms and Architectures | 31 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A mathematical characterization of serially-pruned permutations (SPPs)
employed in variable-length permuters and their associated fast pruning
algorithms and architectures are proposed. Permuters are used in many signal
processing systems for shuffling data and in communication systems as an
adjunct to coding for error correction. Typically only a small set of discrete
permuter lengths are supported. Serial pruning is a simple technique to alter
the length of a permutation to support a wider range of lengths, but results in
a serial processing bottleneck. In this paper, parallelizing SPPs is formulated
in terms of recursively computing sums involving integer floor and related
functions using integer operations, in a fashion analogous to evaluating
Dedekind sums. A mathematical treatment for bit-reversal permutations (BRPs) is
presented, and closed-form expressions for BRP statistics are derived. It is
shown that BRP sequences have weak correlation properties. A new statistic
called permutation inliers that characterizes the pruning gap of pruned
interleavers is proposed. Using this statistic, a recursive algorithm that
computes the minimum inliers count of a pruned BR interleaver (PBRI) in
logarithmic time complexity is presented. This algorithm enables parallelizing
a serial PBRI algorithm by any desired parallelism factor by computing the
pruning gap in lookahead rather than a serial fashion, resulting in significant
reduction in interleaving latency and memory overhead. Extensions to 2-D block
and stream interleavers, as well as applications to pruned fast Fourier
transforms and LTE turbo interleavers, are also presented. Moreover,
hardware-efficient architectures for the proposed algorithms are developed.
Simulation results demonstrate 3 to 4 orders of magnitude improvement in
interleaving time compared to existing approaches.
| [
{
"created": "Sat, 18 Oct 2014 13:03:19 GMT",
"version": "v1"
}
] | 2014-10-21 | [
[
"Mansour",
"Mohammad M.",
""
]
] | A mathematical characterization of serially-pruned permutations (SPPs) employed in variable-length permuters and their associated fast pruning algorithms and architectures are proposed. Permuters are used in many signal processing systems for shuffling data and in communication systems as an adjunct to coding for error correction. Typically only a small set of discrete permuter lengths are supported. Serial pruning is a simple technique to alter the length of a permutation to support a wider range of lengths, but results in a serial processing bottleneck. In this paper, parallelizing SPPs is formulated in terms of recursively computing sums involving integer floor and related functions using integer operations, in a fashion analogous to evaluating Dedekind sums. A mathematical treatment for bit-reversal permutations (BRPs) is presented, and closed-form expressions for BRP statistics are derived. It is shown that BRP sequences have weak correlation properties. A new statistic called permutation inliers that characterizes the pruning gap of pruned interleavers is proposed. Using this statistic, a recursive algorithm that computes the minimum inliers count of a pruned BR interleaver (PBRI) in logarithmic time complexity is presented. This algorithm enables parallelizing a serial PBRI algorithm by any desired parallelism factor by computing the pruning gap in lookahead rather than a serial fashion, resulting in significant reduction in interleaving latency and memory overhead. Extensions to 2-D block and stream interleavers, as well as applications to pruned fast Fourier transforms and LTE turbo interleavers, are also presented. Moreover, hardware-efficient architectures for the proposed algorithms are developed. Simulation results demonstrate 3 to 4 orders of magnitude improvement in interleaving time compared to existing approaches. |
2405.16064 | Kaituo Feng | Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou, Ye Yuan, Guoren
Wang | Keypoint-based Progressive Chain-of-Thought Distillation for LLMs | Accepted by ICML 2024 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chain-of-thought distillation is a powerful technique for transferring
reasoning abilities from large language models (LLMs) to smaller student
models. Previous methods typically require the student to mimic the
step-by-step rationale produced by LLMs, often facing the following challenges:
(i) Tokens within a rationale vary in significance, and treating them equally
may fail to accurately mimic keypoint tokens, leading to reasoning errors. (ii)
They usually distill knowledge by consistently predicting all the steps in a
rationale, which falls short in distinguishing the learning order of step
generation. This diverges from the human cognitive progression of starting with
easy tasks and advancing to harder ones, resulting in sub-optimal outcomes. To
this end, we propose a unified framework, called KPOD, to address these issues.
Specifically, we propose a token weighting module utilizing mask learning to
encourage accurate mimicry of keypoint tokens by the student during
distillation. Besides, we develop an in-rationale progressive distillation
strategy, starting with training the student to generate the final reasoning
steps and gradually extending to cover the entire rationale. To accomplish
this, a weighted token generation loss is proposed to assess step reasoning
difficulty, and a value function is devised to schedule the progressive
distillation by considering both step difficulty and question diversity.
Extensive experiments on four reasoning benchmarks illustrate our KPOD
outperforms previous methods by a large margin.
| [
{
"created": "Sat, 25 May 2024 05:27:38 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Feng",
"Kaituo",
""
],
[
"Li",
"Changsheng",
""
],
[
"Zhang",
"Xiaolu",
""
],
[
"Zhou",
"Jun",
""
],
[
"Yuan",
"Ye",
""
],
[
"Wang",
"Guoren",
""
]
] | Chain-of-thought distillation is a powerful technique for transferring reasoning abilities from large language models (LLMs) to smaller student models. Previous methods typically require the student to mimic the step-by-step rationale produced by LLMs, often facing the following challenges: (i) Tokens within a rationale vary in significance, and treating them equally may fail to accurately mimic keypoint tokens, leading to reasoning errors. (ii) They usually distill knowledge by consistently predicting all the steps in a rationale, which falls short in distinguishing the learning order of step generation. This diverges from the human cognitive progression of starting with easy tasks and advancing to harder ones, resulting in sub-optimal outcomes. To this end, we propose a unified framework, called KPOD, to address these issues. Specifically, we propose a token weighting module utilizing mask learning to encourage accurate mimicry of keypoint tokens by the student during distillation. Besides, we develop an in-rationale progressive distillation strategy, starting with training the student to generate the final reasoning steps and gradually extending to cover the entire rationale. To accomplish this, a weighted token generation loss is proposed to assess step reasoning difficulty, and a value function is devised to schedule the progressive distillation by considering both step difficulty and question diversity. Extensive experiments on four reasoning benchmarks illustrate our KPOD outperforms previous methods by a large margin. |
2211.15602 | Ritesh Goenka | Ritesh Goenka, Eashan Gupta, Sushil Khyalia, Pratyush Agarwal, Mulinti
Shaik Wajid, Shivaram Kalyanakrishnan | Upper Bounds for All and Max-gain Policy Iteration Algorithms on
Deterministic MDPs | Added new bounds for two state MDPs | null | null | null | cs.DM cs.CC math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Policy Iteration (PI) is a widely used family of algorithms to compute
optimal policies for Markov Decision Problems (MDPs). We derive upper bounds on
the running time of PI on Deterministic MDPs (DMDPs): the class of MDPs in
which every state-action pair has a unique next state. Our results include a
non-trivial upper bound that applies to the entire family of PI algorithms;
another to all "max-gain" switching variants; and affirmation that a conjecture
regarding Howard's PI on MDPs is true for DMDPs. Our analysis is based on
certain graph-theoretic results, which may be of independent interest.
| [
{
"created": "Mon, 28 Nov 2022 17:56:30 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Oct 2023 20:19:31 GMT",
"version": "v2"
}
] | 2023-10-10 | [
[
"Goenka",
"Ritesh",
""
],
[
"Gupta",
"Eashan",
""
],
[
"Khyalia",
"Sushil",
""
],
[
"Agarwal",
"Pratyush",
""
],
[
"Wajid",
"Mulinti Shaik",
""
],
[
"Kalyanakrishnan",
"Shivaram",
""
]
] | Policy Iteration (PI) is a widely used family of algorithms to compute optimal policies for Markov Decision Problems (MDPs). We derive upper bounds on the running time of PI on Deterministic MDPs (DMDPs): the class of MDPs in which every state-action pair has a unique next state. Our results include a non-trivial upper bound that applies to the entire family of PI algorithms; another to all "max-gain" switching variants; and affirmation that a conjecture regarding Howard's PI on MDPs is true for DMDPs. Our analysis is based on certain graph-theoretic results, which may be of independent interest. |
2401.04334 | Yiwei Li | Jiaqi Wang, Zihao Wu, Yiwei Li, Hanqi Jiang, Peng Shu, Enze Shi,
Huawen Hu, Chong Ma, Yiheng Liu, Xuhui Wang, Yincheng Yao, Xuan Liu, Huaqin
Zhao, Zhengliang Liu, Haixing Dai, Lin Zhao, Bao Ge, Xiang Li, Tianming Liu,
and Shu Zhang | Large Language Models for Robotics: Opportunities, Challenges, and
Perspectives | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have undergone significant expansion and have
been increasingly integrated across various domains. Notably, in the realm of
robot task planning, LLMs harness their advanced reasoning and language
comprehension capabilities to formulate precise and efficient action plans
based on natural language instructions. However, for embodied tasks, where
robots interact with complex environments, text-only LLMs often face challenges
due to a lack of compatibility with robotic visual perception. This study
provides a comprehensive overview of the emerging integration of LLMs and
multimodal LLMs into various robotic tasks. Additionally, we propose a
framework that utilizes multimodal GPT-4V to enhance embodied task planning
through the combination of natural language instructions and robot visual
perceptions. Our results, based on diverse datasets, indicate that GPT-4V
effectively enhances robot performance in embodied tasks. This extensive survey
and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks
enriches the understanding of LLM-centric embodied intelligence and provides
forward-looking insights toward bridging the gap in Human-Robot-Environment
interaction.
| [
{
"created": "Tue, 9 Jan 2024 03:22:16 GMT",
"version": "v1"
}
] | 2024-01-10 | [
[
"Wang",
"Jiaqi",
""
],
[
"Wu",
"Zihao",
""
],
[
"Li",
"Yiwei",
""
],
[
"Jiang",
"Hanqi",
""
],
[
"Shu",
"Peng",
""
],
[
"Shi",
"Enze",
""
],
[
"Hu",
"Huawen",
""
],
[
"Ma",
"Chong",
""
],
[
"Liu",
"Yiheng",
""
],
[
"Wang",
"Xuhui",
""
],
[
"Yao",
"Yincheng",
""
],
[
"Liu",
"Xuan",
""
],
[
"Zhao",
"Huaqin",
""
],
[
"Liu",
"Zhengliang",
""
],
[
"Dai",
"Haixing",
""
],
[
"Zhao",
"Lin",
""
],
[
"Ge",
"Bao",
""
],
[
"Li",
"Xiang",
""
],
[
"Liu",
"Tianming",
""
],
[
"Zhang",
"Shu",
""
]
] | Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights toward bridging the gap in Human-Robot-Environment interaction. |
2110.06135 | Marc-Andre Schulz | Marc-Andre Schulz, Bertrand Thirion, Alexandre Gramfort, Ga\"el
Varoquaux, Danilo Bzdok | Label scarcity in biomedicine: Data-rich latent factor discovery
enhances phenotype prediction | Accepted at NIPS 2017 Workshop on Machine Learning for Health | null | null | null | cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-quality data accumulation is now becoming ubiquitous in the health
domain. There is increasing opportunity to exploit rich data from normal
subjects to improve supervised estimators in specific diseases with notorious
data scarcity. We demonstrate that low-dimensional embedding spaces can be
derived from the UK Biobank population dataset and used to enhance data-scarce
prediction of health indicators, lifestyle and demographic characteristics.
Phenotype predictions facilitated by Variational Autoencoder manifolds
typically scaled better with increasing unlabeled data than dimensionality
reduction by PCA or Isomap. Performance gains from semi-supervised approaches
will probably become an important ingredient for various medical data science
applications.
| [
{
"created": "Tue, 12 Oct 2021 16:25:50 GMT",
"version": "v1"
}
] | 2021-10-13 | [
[
"Schulz",
"Marc-Andre",
""
],
[
"Thirion",
"Bertrand",
""
],
[
"Gramfort",
"Alexandre",
""
],
[
"Varoquaux",
"Gaël",
""
],
[
"Bzdok",
"Danilo",
""
]
] | High-quality data accumulation is now becoming ubiquitous in the health domain. There is increasing opportunity to exploit rich data from normal subjects to improve supervised estimators in specific diseases with notorious data scarcity. We demonstrate that low-dimensional embedding spaces can be derived from the UK Biobank population dataset and used to enhance data-scarce prediction of health indicators, lifestyle and demographic characteristics. Phenotype predictions facilitated by Variational Autoencoder manifolds typically scaled better with increasing unlabeled data than dimensionality reduction by PCA or Isomap. Performance gains from semi-supervised approaches will probably become an important ingredient for various medical data science applications. |
2205.06356 | Kabir Ahuja | Kabir Ahuja, Sandipan Dandapat, Sunayana Sitaram, Monojit Choudhury | Beyond Static Models and Test Sets: Benchmarking the Potential of
Pre-trained Models Across Tasks and Languages | NLP Power! Workshop, ACL 2022 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Although recent Massively Multilingual Language Models (MMLMs) like mBERT and
XLMR support around 100 languages, most existing multilingual NLP benchmarks
provide evaluation data in only a handful of these languages with little
linguistic diversity. We argue that this makes the existing practices in
multilingual evaluation unreliable and does not provide a full picture of the
performance of MMLMs across the linguistic landscape. We propose that the
recent work done in Performance Prediction for NLP tasks can serve as a
potential solution in fixing benchmarking in Multilingual NLP by utilizing
features related to data and language typology to estimate the performance of
an MMLM on different languages. We compare performance prediction with
translating test data with a case study on four different multilingual
datasets, and observe that these methods can provide reliable estimates of the
performance that are often on par with the translation-based approaches,
without the need for any additional translation as well as evaluation costs.
| [
{
"created": "Thu, 12 May 2022 20:42:48 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Nov 2022 16:10:01 GMT",
"version": "v2"
}
] | 2022-11-15 | [
[
"Ahuja",
"Kabir",
""
],
[
"Dandapat",
"Sandipan",
""
],
[
"Sitaram",
"Sunayana",
""
],
[
"Choudhury",
"Monojit",
""
]
] | Although recent Massively Multilingual Language Models (MMLMs) like mBERT and XLMR support around 100 languages, most existing multilingual NLP benchmarks provide evaluation data in only a handful of these languages with little linguistic diversity. We argue that this makes the existing practices in multilingual evaluation unreliable and does not provide a full picture of the performance of MMLMs across the linguistic landscape. We propose that the recent work done in Performance Prediction for NLP tasks can serve as a potential solution in fixing benchmarking in Multilingual NLP by utilizing features related to data and language typology to estimate the performance of an MMLM on different languages. We compare performance prediction with translating test data with a case study on four different multilingual datasets, and observe that these methods can provide reliable estimates of the performance that are often on-par with the translation based approaches, without the need for any additional translation as well as evaluation costs. |
2401.13905 | Thomas Lippincott | Hale Sirin, Tom Lippincott | Dynamic embedded topic models and change-point detection for exploring
literary-historical hypotheses | Accepted to LaTeCH@EACL2024 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel combination of dynamic embedded topic models and
change-point detection to explore diachronic change of lexical semantic
modality in classical and early Christian Latin. We demonstrate several methods
for finding and characterizing patterns in the output, and relating them to
traditional scholarship in Comparative Literature and Classics. This simple
approach to unsupervised models of semantic change can be applied to any
suitable corpus, and we conclude with future directions and refinements aiming
to allow noisier, less-curated materials to meet that threshold.
| [
{
"created": "Thu, 25 Jan 2024 02:50:03 GMT",
"version": "v1"
}
] | 2024-01-26 | [
[
"Sirin",
"Hale",
""
],
[
"Lippincott",
"Tom",
""
]
] | We present a novel combination of dynamic embedded topic models and change-point detection to explore diachronic change of lexical semantic modality in classical and early Christian Latin. We demonstrate several methods for finding and characterizing patterns in the output, and relating them to traditional scholarship in Comparative Literature and Classics. This simple approach to unsupervised models of semantic change can be applied to any suitable corpus, and we conclude with future directions and refinements aiming to allow noisier, less-curated materials to meet that threshold. |
2311.14136 | Ana Fern\'andez Vilas | Carlos Beis-Penedo and Francisco Troncoso-Pastoriza and Rebeca P.
D\'iaz-Redondo and Ana Fern\'andez-Vilas and Manuel Fern\'andez-Veiga and
Mart\'in Gonz\'alez Soto | A Blockchain Solution for Collaborative Machine Learning over IoT | null | null | null | null | cs.LG cs.CR cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth of Internet of Things (IoT) devices and applications has led
to an increased demand for advanced analytics and machine learning techniques
capable of handling the challenges associated with data privacy, security, and
scalability. Federated learning (FL) and blockchain technologies have emerged
as promising approaches to address these challenges by enabling decentralized,
secure, and privacy-preserving model training on distributed data sources. In
this paper, we present a novel IoT solution that combines the incremental
learning vector quantization algorithm (XuILVQ) with Ethereum blockchain
technology to facilitate secure and efficient data sharing, model training, and
prototype storage in a distributed environment. Our proposed architecture
addresses the shortcomings of existing blockchain-based FL solutions by
reducing computational and communication overheads while maintaining data
privacy and security. We assess the performance of our system through a series
of experiments, showcasing its potential to enhance the accuracy and efficiency
of machine learning tasks in IoT settings.
| [
{
"created": "Thu, 23 Nov 2023 18:06:05 GMT",
"version": "v1"
}
] | 2023-11-27 | [
[
"Beis-Penedo",
"Carlos",
""
],
[
"Troncoso-Pastoriza",
"Francisco",
""
],
[
"Díaz-Redondo",
"Rebeca P.",
""
],
[
"Fernández-Vilas",
"Ana",
""
],
[
"Fernández-Veiga",
"Manuel",
""
],
[
"Soto",
"Martín González",
""
]
] | The rapid growth of Internet of Things (IoT) devices and applications has led to an increased demand for advanced analytics and machine learning techniques capable of handling the challenges associated with data privacy, security, and scalability. Federated learning (FL) and blockchain technologies have emerged as promising approaches to address these challenges by enabling decentralized, secure, and privacy-preserving model training on distributed data sources. In this paper, we present a novel IoT solution that combines the incremental learning vector quantization algorithm (XuILVQ) with Ethereum blockchain technology to facilitate secure and efficient data sharing, model training, and prototype storage in a distributed environment. Our proposed architecture addresses the shortcomings of existing blockchain-based FL solutions by reducing computational and communication overheads while maintaining data privacy and security. We assess the performance of our system through a series of experiments, showcasing its potential to enhance the accuracy and efficiency of machine learning tasks in IoT settings. |
2002.07775 | Jeena Kleenankandy | Jeena Kleenankandy, K. A. Abdul Nazeer (Department of Computer Science
and Engineering, National Institute of Technology Calicut, Kerala, India) | An enhanced Tree-LSTM architecture for sentence semantic modeling using
typed dependencies | Accepted manuscript submitted to Journal of Information Processing
and Management ( Elsevier ) on June 11, 2020 | Information Processing & Management, Elsevier (2020) | 10.1016/j.ipm.2020.102362 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tree-based Long short-term memory (LSTM) networks have become state-of-the-art
for modeling the meaning of language texts, as they can effectively exploit the
grammatical syntax and thereby non-linear dependencies among words of the
sentence. However, most of these models cannot recognize the difference in
meaning caused by a change in semantic roles of words or phrases because they
do not acknowledge the type of grammatical relations, also known as typed
dependencies, in sentence structure. This paper proposes an enhanced LSTM
architecture, called relation gated LSTM, which can model the relationship
between two inputs of a sequence using a control input. We also introduce a
Tree-LSTM model called Typed Dependency Tree-LSTM that uses the sentence
dependency parse structure as well as the dependency type to embed sentence
meaning into a dense vector. The proposed model outperformed its type-unaware
counterpart in two typical NLP tasks - Semantic Relatedness Scoring and
Sentiment Analysis, in fewer training epochs. The results were
comparable or competitive with other state-of-the-art models. Qualitative
analysis showed that changes in the voice of sentences had little effect on the
model's predicted scores, while changes in nominal (noun) words had a more
significant impact. The model recognized subtle semantic relationships in
sentence pairs. The magnitudes of the learned typed dependency embeddings were
also in agreement with human intuitions. The research findings imply the
significance of grammatical relations in sentence modeling. The proposed models
would serve as a base for future research in this direction.
| [
{
"created": "Tue, 18 Feb 2020 18:10:03 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Sep 2020 09:45:26 GMT",
"version": "v2"
}
] | 2020-09-28 | [
[
"Kleenankandy",
"Jeena",
"",
"Department of Computer Science\n and Engineering, National Institute of Technology Calicut, Kerala, India"
],
[
"Nazeer",
"K. A. Abdul",
"",
"Department of Computer Science\n and Engineering, National Institute of Technology Calicut, Kerala, India"
]
] | Tree-based Long short-term memory (LSTM) networks have become state-of-the-art for modeling the meaning of language texts, as they can effectively exploit the grammatical syntax and thereby non-linear dependencies among words of the sentence. However, most of these models cannot recognize the difference in meaning caused by a change in semantic roles of words or phrases because they do not acknowledge the type of grammatical relations, also known as typed dependencies, in sentence structure. This paper proposes an enhanced LSTM architecture, called relation gated LSTM, which can model the relationship between two inputs of a sequence using a control input. We also introduce a Tree-LSTM model called Typed Dependency Tree-LSTM that uses the sentence dependency parse structure as well as the dependency type to embed sentence meaning into a dense vector. The proposed model outperformed its type-unaware counterpart in two typical NLP tasks - Semantic Relatedness Scoring and Sentiment Analysis, in fewer training epochs. The results were comparable or competitive with other state-of-the-art models. Qualitative analysis showed that changes in the voice of sentences had little effect on the model's predicted scores, while changes in nominal (noun) words had a more significant impact. The model recognized subtle semantic relationships in sentence pairs. The magnitudes of the learned typed dependency embeddings were also in agreement with human intuitions. The research findings imply the significance of grammatical relations in sentence modeling. The proposed models would serve as a base for future research in this direction. |
1410.0277 | Christian H\"ager | Christian H\"ager, Alexandre Graell i Amat, Fredrik Br\"annstr\"om,
Alex Alvarado, Erik Agrell | Terminated and Tailbiting Spatially-Coupled Codes with Optimized Bit
Mappings for Spectrally Efficient Fiber-Optical Systems | This paper has been accepted for publication in the IEEE/OSA Journal
of Lightwave Technology | IEEE/OSA Journal of Lightwave Technology, vol. 33, no. 7, pp.
1275-1285 Apr. 2015 [invited] | 10.1109/JLT.2015.2390596 | null | cs.IT math.IT physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the design of spectrally efficient fiber-optical communication
systems based on different spatially coupled (SC) forward error correction
(FEC) schemes. In particular, we optimize the allocation of the coded bits from
the FEC encoder to the modulation bits of the signal constellation. Two SC code
classes are considered. The codes in the first class are protograph-based
low-density parity-check (LDPC) codes which are decoded using iterative
soft-decision decoding. The codes in the second class are generalized LDPC
codes which are decoded using iterative hard-decision decoding. For both code
classes, the bit allocation is optimized for the terminated and tailbiting SC
cases based on a density evolution analysis. An optimized bit allocation can
significantly improve the performance of tailbiting SC codes over the
baseline sequential allocation, up to the point where they have a comparable
gap to capacity as their terminated counterparts, at a lower FEC overhead. For
the considered terminated SC codes, the optimization only results in marginal
performance improvements, suggesting that in this case a sequential allocation
is close to optimal.
| [
{
"created": "Wed, 1 Oct 2014 16:30:21 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Jan 2015 16:19:29 GMT",
"version": "v2"
}
] | 2024-01-30 | [
[
"Häger",
"Christian",
""
],
[
"Amat",
"Alexandre Graell i",
""
],
[
"Brännström",
"Fredrik",
""
],
[
"Alvarado",
"Alex",
""
],
[
"Agrell",
"Erik",
""
]
] | We study the design of spectrally efficient fiber-optical communication systems based on different spatially coupled (SC) forward error correction (FEC) schemes. In particular, we optimize the allocation of the coded bits from the FEC encoder to the modulation bits of the signal constellation. Two SC code classes are considered. The codes in the first class are protograph-based low-density parity-check (LDPC) codes which are decoded using iterative soft-decision decoding. The codes in the second class are generalized LDPC codes which are decoded using iterative hard-decision decoding. For both code classes, the bit allocation is optimized for the terminated and tailbiting SC cases based on a density evolution analysis. An optimized bit allocation can significantly improve the performance of tailbiting SC codes over the baseline sequential allocation, up to the point where they have a comparable gap to capacity as their terminated counterparts, at a lower FEC overhead. For the considered terminated SC codes, the optimization only results in marginal performance improvements, suggesting that in this case a sequential allocation is close to optimal. |
1705.07807 | Piotr Mardziel | Anupam Datta, Matthew Fredrikson, Gihyuk Ko, Piotr Mardziel, Shayak
Sen | Use Privacy in Data-Driven Systems: Theory and Experiments with Machine
Learnt Programs | extended CCS 2017 camera-ready: several new discussions, and
complexity results added to appendix | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an approach to formalizing and enforcing a class of use
privacy properties in data-driven systems. In contrast to prior work, we focus
on use restrictions on proxies (i.e. strong predictors) of protected
information types. Our definition relates proxy use to intermediate
computations that occur in a program, and identifies two essential properties
that characterize this behavior: 1) its result is strongly associated with the
protected information type in question, and 2) it is likely to causally affect
the final output of the program. For a specific instantiation of this
definition, we present a program analysis technique that detects instances of
proxy use in a model, and provides a witness that identifies which parts of the
corresponding program exhibit the behavior. Recognizing that not all instances
of proxy use of a protected information type are inappropriate, we make use of
a normative judgment oracle that makes this inappropriateness determination for
a given witness. Our repair algorithm uses the witness of an inappropriate
proxy use to transform the model into one that provably does not exhibit proxy
use, while avoiding changes that unduly affect classification accuracy. Using a
corpus of social datasets, our evaluation shows that these algorithms are able
to detect proxy use instances that would be difficult to find using existing
techniques, and subsequently remove them while maintaining acceptable
classification performance.
| [
{
"created": "Mon, 22 May 2017 15:28:43 GMT",
"version": "v1"
},
{
"created": "Wed, 24 May 2017 03:46:13 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Sep 2017 06:36:33 GMT",
"version": "v3"
}
] | 2017-09-08 | [
[
"Datta",
"Anupam",
""
],
[
"Fredrikson",
"Matthew",
""
],
[
"Ko",
"Gihyuk",
""
],
[
"Mardziel",
"Piotr",
""
],
[
"Sen",
"Shayak",
""
]
] | This paper presents an approach to formalizing and enforcing a class of use privacy properties in data-driven systems. In contrast to prior work, we focus on use restrictions on proxies (i.e. strong predictors) of protected information types. Our definition relates proxy use to intermediate computations that occur in a program, and identifies two essential properties that characterize this behavior: 1) its result is strongly associated with the protected information type in question, and 2) it is likely to causally affect the final output of the program. For a specific instantiation of this definition, we present a program analysis technique that detects instances of proxy use in a model, and provides a witness that identifies which parts of the corresponding program exhibit the behavior. Recognizing that not all instances of proxy use of a protected information type are inappropriate, we make use of a normative judgment oracle that makes this inappropriateness determination for a given witness. Our repair algorithm uses the witness of an inappropriate proxy use to transform the model into one that provably does not exhibit proxy use, while avoiding changes that unduly affect classification accuracy. Using a corpus of social datasets, our evaluation shows that these algorithms are able to detect proxy use instances that would be difficult to find using existing techniques, and subsequently remove them while maintaining acceptable classification performance. |
2405.17329 | Wei Xu | Yaqiong Zhao, Jindan Xu, Wei Xu, Kezhi Wang, Xinquan Ye, Chau Yuen and
Xiaohu You | Joint MIMO Transceiver and Reflector Design for Reconfigurable
Intelligent Surface-Assisted Communication | 14 pages, 12 figures | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider a reconfigurable intelligent surface
(RIS)-assisted multiple-input multiple-output communication system with
multiple antennas at both the base station (BS) and the user. We aim to
maximize the achievable rate by jointly optimizing the transmit precoding
matrix, the receive combining matrix, and the RIS reflection matrix under the
constraints of the transmit power at the BS and the unit-modulus reflection at
the RIS. Regarding the non-trivial form of this problem, we first reformulate it
into a tractable problem by utilizing the relationship
between the achievable rate and the weighted minimum mean squared error. Next,
the transmit precoding matrix, the receive combining matrix, and the RIS
reflection matrix are alternately optimized. In particular, the optimal
transmit precoding matrix and receive combining matrix are obtained in closed
forms. Furthermore, a pair of computationally efficient methods are proposed
for the RIS reflection matrix, namely the semi-definite relaxation (SDR) method
and the successive closed form (SCF) method. We theoretically prove that both
methods are guaranteed to converge, and that the SCF-based algorithm converges
to a Karush-Kuhn-Tucker point of the problem.
| [
{
"created": "Mon, 27 May 2024 16:28:37 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Zhao",
"Yaqiong",
""
],
[
"Xu",
"Jindan",
""
],
[
"Xu",
"Wei",
""
],
[
"Wang",
"Kezhi",
""
],
[
"Ye",
"Xinquan",
""
],
[
"Yuen",
"Chau",
""
],
[
"You",
"Xiaohu",
""
]
] | In this paper, we consider a reconfigurable intelligent surface (RIS)-assisted multiple-input multiple-output communication system with multiple antennas at both the base station (BS) and the user. We aim to maximize the achievable rate by jointly optimizing the transmit precoding matrix, the receive combining matrix, and the RIS reflection matrix under the constraints of the transmit power at the BS and the unit-modulus reflection at the RIS. Regarding the non-trivial form of this problem, we first reformulate it into a tractable problem by utilizing the relationship between the achievable rate and the weighted minimum mean squared error. Next, the transmit precoding matrix, the receive combining matrix, and the RIS reflection matrix are alternately optimized. In particular, the optimal transmit precoding matrix and receive combining matrix are obtained in closed forms. Furthermore, a pair of computationally efficient methods are proposed for the RIS reflection matrix, namely the semi-definite relaxation (SDR) method and the successive closed form (SCF) method. We theoretically prove that both methods are guaranteed to converge, and that the SCF-based algorithm converges to a Karush-Kuhn-Tucker point of the problem. |
1312.7570 | Stefan Mathe | Stefan Mathe, Cristian Sminchisescu | Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for
Visual Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Systems based on bag-of-words models from image features collected at maxima
of sparse interest point operators have been used successfully for both
computer visual object and action recognition tasks. While the sparse,
interest-point based approach to recognition is not inconsistent with visual
processing in biological systems that operate in `saccade and fixate' regimes,
the methodology and emphasis in the human and the computer vision communities
remain sharply distinct. Here, we make three contributions aiming to bridge
this gap. First, we complement existing state-of-the-art large scale dynamic
computer vision annotated datasets like Hollywood-2 and UCF Sports with human
eye movements collected under the ecological constraints of the visual action
recognition task. To our knowledge these are the first large human eye tracking
datasets to be collected and made publicly available for video,
vision.imar.ro/eyetracking (497,107 frames, each viewed by 16 subjects), unique
in terms of their (a) large scale and computer vision relevance, (b) dynamic,
video stimuli, (c) task control, as opposed to free-viewing. Second, we
introduce novel sequential consistency and alignment measures, which underline
the remarkable stability of patterns of visual search among subjects. Third, we
leverage the significant amount of collected data in order to pursue studies
and build automatic, end-to-end trainable computer vision systems based on
human eye movements. Our studies not only shed light on the differences between
computer vision spatio-temporal interest point image sampling strategies and
the human fixations, as well as their impact on visual recognition
performance, but also demonstrate that human fixations can be accurately
predicted, and when used in an end-to-end automatic system, leveraging some of
the advanced computer vision practice, can lead to state of the art results.
| [
{
"created": "Sun, 29 Dec 2013 18:49:04 GMT",
"version": "v1"
}
] | 2013-12-31 | [
[
"Mathe",
"Stefan",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] | Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both computer visual object and action recognition tasks. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in `saccade and fixate' regimes, the methodology and emphasis in the human and the computer vision communities remain sharply distinct. Here, we make three contributions aiming to bridge this gap. First, we complement existing state-of-the-art large scale dynamic computer vision annotated datasets like Hollywood-2 and UCF Sports with human eye movements collected under the ecological constraints of the visual action recognition task. To our knowledge these are the first large human eye tracking datasets to be collected and made publicly available for video, vision.imar.ro/eyetracking (497,107 frames, each viewed by 16 subjects), unique in terms of their (a) large scale and computer vision relevance, (b) dynamic, video stimuli, (c) task control, as opposed to free-viewing. Second, we introduce novel sequential consistency and alignment measures, which underline the remarkable stability of patterns of visual search among subjects. Third, we leverage the significant amount of collected data in order to pursue studies and build automatic, end-to-end trainable computer vision systems based on human eye movements. Our studies not only shed light on the differences between computer vision spatio-temporal interest point image sampling strategies and the human fixations, as well as their impact on visual recognition performance, but also demonstrate that human fixations can be accurately predicted, and when used in an end-to-end automatic system, leveraging some of the advanced computer vision practice, can lead to state of the art results. |
2106.14274 | Zhiqin Chen | Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang | Learning Mesh Representations via Binary Space Partitioning Tree
Networks | Accepted to TPAMI. This is the extended journal version of BSP-Net
(arXiv:1911.06971) from CVPR 2020 | null | 10.1109/TPAMI.2021.3093440 | null | cs.CV cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Polygonal meshes are ubiquitous, but have only played a relatively minor role
in the deep learning revolution. State-of-the-art neural generative models for
3D shapes learn implicit functions and generate meshes via expensive
iso-surfacing. We overcome these challenges by employing a classical spatial
data structure from computer graphics, Binary Space Partitioning (BSP), to
facilitate 3D learning. The core operation of BSP involves recursive
subdivision of 3D space to obtain convex sets. By exploiting this property, we
devise BSP-Net, a network that learns to represent a 3D shape via convex
decomposition without supervision. The network is trained to reconstruct a
shape using a set of convexes obtained from a BSP-tree built over a set of
planes, where the planes and convexes are both defined by learned network
weights. BSP-Net directly outputs polygonal meshes from the inferred convexes.
The generated meshes are watertight, compact (i.e., low-poly), and well suited
to represent sharp geometry. We show that the reconstruction quality by BSP-Net
is competitive with those from state-of-the-art methods while using much fewer
primitives. We also explore variations to BSP-Net including using a more
generic decoder for reconstruction, more general primitives than planes, as
well as training a generative model with variational auto-encoders. Code is
available at https://github.com/czq142857/BSP-NET-original.
| [
{
"created": "Sun, 27 Jun 2021 16:37:54 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Jul 2021 00:24:22 GMT",
"version": "v2"
}
] | 2021-07-05 | [
[
"Chen",
"Zhiqin",
""
],
[
"Tagliasacchi",
"Andrea",
""
],
[
"Zhang",
"Hao",
""
]
] | Polygonal meshes are ubiquitous, but have only played a relatively minor role in the deep learning revolution. State-of-the-art neural generative models for 3D shapes learn implicit functions and generate meshes via expensive iso-surfacing. We overcome these challenges by employing a classical spatial data structure from computer graphics, Binary Space Partitioning (BSP), to facilitate 3D learning. The core operation of BSP involves recursive subdivision of 3D space to obtain convex sets. By exploiting this property, we devise BSP-Net, a network that learns to represent a 3D shape via convex decomposition without supervision. The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built over a set of planes, where the planes and convexes are both defined by learned network weights. BSP-Net directly outputs polygonal meshes from the inferred convexes. The generated meshes are watertight, compact (i.e., low-poly), and well suited to represent sharp geometry. We show that the reconstruction quality by BSP-Net is competitive with those from state-of-the-art methods while using much fewer primitives. We also explore variations to BSP-Net including using a more generic decoder for reconstruction, more general primitives than planes, as well as training a generative model with variational auto-encoders. Code is available at https://github.com/czq142857/BSP-NET-original. |
2301.11899 | Shvetank Prakash | Shvetank Prakash, Matthew Stewart, Colby Banbury, Mark Mazumder, Pete
Warden, Brian Plancher, Vijay Janapa Reddi | Is TinyML Sustainable? Assessing the Environmental Impacts of Machine
Learning on Microcontrollers | Communications of the ACM (CACM) November 2023 Issue | null | null | null | cs.LG cs.AR cs.CY | http://creativecommons.org/licenses/by/4.0/ | The sustained growth of carbon emissions and global waste elicits significant
sustainability concerns for our environment's future. The growing Internet of
Things (IoT) has the potential to exacerbate this issue. However, an emerging
area known as Tiny Machine Learning (TinyML) has the opportunity to help
address these environmental challenges through sustainable computing practices.
TinyML, the deployment of machine learning (ML) algorithms onto low-cost,
low-power microcontroller systems, enables on-device sensor analytics that
unlocks numerous always-on ML applications. This article discusses both the
potential of these TinyML applications to address critical sustainability
challenges, as well as the environmental footprint of this emerging technology.
Through a complete life cycle analysis (LCA), we find that TinyML systems
present opportunities to offset their carbon emissions by enabling applications
that reduce the emissions of other sectors. Nevertheless, when globally scaled,
the carbon footprint of TinyML systems is not negligible, necessitating that
designers factor in environmental impact when formulating new devices. Finally,
we outline research directions to enable further sustainable contributions of
TinyML.
| [
{
"created": "Fri, 27 Jan 2023 18:23:10 GMT",
"version": "v1"
},
{
"created": "Fri, 19 May 2023 17:54:49 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Nov 2023 11:24:29 GMT",
"version": "v3"
}
] | 2023-11-22 | [
[
"Prakash",
"Shvetank",
""
],
[
"Stewart",
"Matthew",
""
],
[
"Banbury",
"Colby",
""
],
[
"Mazumder",
"Mark",
""
],
[
"Warden",
"Pete",
""
],
[
"Plancher",
"Brian",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] | The sustained growth of carbon emissions and global waste elicits significant sustainability concerns for our environment's future. The growing Internet of Things (IoT) has the potential to exacerbate this issue. However, an emerging area known as Tiny Machine Learning (TinyML) has the opportunity to help address these environmental challenges through sustainable computing practices. TinyML, the deployment of machine learning (ML) algorithms onto low-cost, low-power microcontroller systems, enables on-device sensor analytics that unlocks numerous always-on ML applications. This article discusses both the potential of these TinyML applications to address critical sustainability challenges, as well as the environmental footprint of this emerging technology. Through a complete life cycle analysis (LCA), we find that TinyML systems present opportunities to offset their carbon emissions by enabling applications that reduce the emissions of other sectors. Nevertheless, when globally scaled, the carbon footprint of TinyML systems is not negligible, necessitating that designers factor in environmental impact when formulating new devices. Finally, we outline research directions to enable further sustainable contributions of TinyML. |
2312.13558 | Dipendra Misra | Pratyusha Sharma, Jordan T. Ash and Dipendra Misra | The Truth is in There: Improving Reasoning in Language Models with
Layer-Selective Rank Reduction | null | null | null | null | cs.LG cs.AI cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer-based Large Language Models (LLMs) have become a fixture in
modern machine learning. Correspondingly, significant resources are allocated
towards research that aims to further advance this technology, typically
resulting in models of increasing size that are trained on increasing amounts
of data. This work, however, demonstrates the surprising result that it is
often possible to significantly improve the performance of LLMs by selectively
removing higher-order components of their weight matrices. This simple
intervention, which we call LAyer-SElective Rank reduction (LASER), can be done
on a model after training has completed, and requires no additional parameters
or data. We show extensive experiments demonstrating the generality of this
finding across language models and datasets, and provide in-depth analyses
offering insights into both when LASER is effective and the mechanism by which
it operates.
| [
{
"created": "Thu, 21 Dec 2023 03:51:08 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Sharma",
"Pratyusha",
""
],
[
"Ash",
"Jordan T.",
""
],
[
"Misra",
"Dipendra",
""
]
] | Transformer-based Large Language Models (LLMs) have become a fixture in modern machine learning. Correspondingly, significant resources are allocated towards research that aims to further advance this technology, typically resulting in models of increasing size that are trained on increasing amounts of data. This work, however, demonstrates the surprising result that it is often possible to significantly improve the performance of LLMs by selectively removing higher-order components of their weight matrices. This simple intervention, which we call LAyer-SElective Rank reduction (LASER), can be done on a model after training has completed, and requires no additional parameters or data. We show extensive experiments demonstrating the generality of this finding across language models and datasets, and provide in-depth analyses offering insights into both when LASER is effective and the mechanism by which it operates. |
2112.13023 | Miroslav Fil | Miroslav Fil, Binxin Ru, Clare Lyle, Yarin Gal | DARTS without a Validation Set: Optimizing the Marginal Likelihood | Presented at the 5th Workshop on Meta-Learning at NeurIPS 2021 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of neural architecture search (NAS) has historically been limited
by excessive compute requirements. While modern weight-sharing NAS methods such
as DARTS are able to finish the search in single-digit GPU days, extracting the
final best architecture from the shared weights is notoriously unreliable.
Training-Speed-Estimate (TSE), a recently developed generalization estimator
with a Bayesian marginal likelihood interpretation, has previously been used in
place of the validation loss for gradient-based optimization in DARTS. This
prevents the DARTS skip connection collapse, which significantly improves
performance on NASBench-201 and the original DARTS search space. We extend
those results by applying various DARTS diagnostics and show several unusual
behaviors arising from not using a validation set. Furthermore, our experiments
yield concrete examples of the depth gap and topology selection in DARTS having
a strongly negative impact on the search performance despite generally
receiving limited attention in the literature compared to the operations
selection.
| [
{
"created": "Fri, 24 Dec 2021 10:16:38 GMT",
"version": "v1"
}
] | 2021-12-28 | [
[
"Fil",
"Miroslav",
""
],
[
"Ru",
"Binxin",
""
],
[
"Lyle",
"Clare",
""
],
[
"Gal",
"Yarin",
""
]
] | The success of neural architecture search (NAS) has historically been limited by excessive compute requirements. While modern weight-sharing NAS methods such as DARTS are able to finish the search in single-digit GPU days, extracting the final best architecture from the shared weights is notoriously unreliable. Training-Speed-Estimate (TSE), a recently developed generalization estimator with a Bayesian marginal likelihood interpretation, has previously been used in place of the validation loss for gradient-based optimization in DARTS. This prevents the DARTS skip connection collapse, which significantly improves performance on NASBench-201 and the original DARTS search space. We extend those results by applying various DARTS diagnostics and show several unusual behaviors arising from not using a validation set. Furthermore, our experiments yield concrete examples of the depth gap and topology selection in DARTS having a strongly negative impact on the search performance despite generally receiving limited attention in the literature compared to the operations selection. |
1711.06841 | Eli (Omid) David | Eli David, Moshe Koppel, Nathan S. Netanyahu | Expert-Driven Genetic Algorithms for Simulating Evaluation Functions | arXiv admin note: substantial text overlap with arXiv:1711.06839,
arXiv:1711.06840 | Genetic Programming and Evolvable Machines, Vol. 12, No. 1, pp.
5-22, March 2011 | 10.1007/s10710-010-9103-4 | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we demonstrate how genetic algorithms can be used to reverse
engineer an evaluation function's parameters for computer chess. Our results
show that using an appropriate expert (or mentor), we can evolve a program that
is on par with top tournament-playing chess programs, outperforming a two-time
World Computer Chess Champion. This performance gain is achieved by evolving a
program that mimics the behavior of a superior expert. The resulting evaluation
function of the evolved program consists of a much smaller number of parameters
than the expert's. The extended experimental results provided in this paper
include a report of our successful participation in the 2008 World Computer
Chess Championship. In principle, our expert-driven approach could be used in a
wide range of problems for which appropriate experts are available.
| [
{
"created": "Sat, 18 Nov 2017 10:22:49 GMT",
"version": "v1"
}
] | 2017-11-21 | [
[
"David",
"Eli",
""
],
[
"Koppel",
"Moshe",
""
],
[
"Netanyahu",
"Nathan S.",
""
]
] | In this paper we demonstrate how genetic algorithms can be used to reverse engineer an evaluation function's parameters for computer chess. Our results show that using an appropriate expert (or mentor), we can evolve a program that is on par with top tournament-playing chess programs, outperforming a two-time World Computer Chess Champion. This performance gain is achieved by evolving a program that mimics the behavior of a superior expert. The resulting evaluation function of the evolved program consists of a much smaller number of parameters than the expert's. The extended experimental results provided in this paper include a report of our successful participation in the 2008 World Computer Chess Championship. In principle, our expert-driven approach could be used in a wide range of problems for which appropriate experts are available. |
1409.8580 | Udo Schilcher | Udo Schilcher, Stavros Toumpis, Martin Haenggi, Alessandro Crismani,
G\"unther Brandner, Christian Bettstetter | Interference Functionals in Poisson Networks | null | IEEE Transactions on Information Theory, vol. 62, no. 1, pp.
370-383, Jan. 2016 | 10.1109/TIT.2015.2501799 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose and prove a theorem that allows the calculation of a class of
functionals on Poisson point processes that have the form of expected values of
sum-products of functions. In proving the theorem, we present a variant of the
Campbell-Mecke theorem from stochastic geometry. We proceed to apply our result
in the calculation of expected values involving interference in wireless
Poisson networks. Based on this, we derive outage probabilities for
transmissions in a Poisson network with Nakagami fading. Our results extend the
stochastic geometry toolbox used for the mathematical analysis of
interference-limited wireless networks.
| [
{
"created": "Tue, 30 Sep 2014 15:00:22 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jun 2015 09:35:03 GMT",
"version": "v2"
}
] | 2018-06-05 | [
[
"Schilcher",
"Udo",
""
],
[
"Toumpis",
"Stavros",
""
],
[
"Haenggi",
"Martin",
""
],
[
"Crismani",
"Alessandro",
""
],
[
"Brandner",
"Günther",
""
],
[
"Bettstetter",
"Christian",
""
]
] | We propose and prove a theorem that allows the calculation of a class of functionals on Poisson point processes that have the form of expected values of sum-products of functions. In proving the theorem, we present a variant of the Campbell-Mecke theorem from stochastic geometry. We proceed to apply our result in the calculation of expected values involving interference in wireless Poisson networks. Based on this, we derive outage probabilities for transmissions in a Poisson network with Nakagami fading. Our results extend the stochastic geometry toolbox used for the mathematical analysis of interference-limited wireless networks. |