| id | title | categories | abstract |
|---|---|---|---|
2502.14829
|
Measuring Faithfulness of Chains of Thought by Unlearning Reasoning
Steps
|
cs.CL
|
When prompted to think step-by-step, language models (LMs) produce a chain of
thought (CoT), a sequence of reasoning steps that the model supposedly used to
produce its prediction. However, despite much work on CoT prompting, it is
unclear if CoT reasoning is faithful to the models' parametric beliefs. We
introduce a framework for measuring parametric faithfulness of generated
reasoning, and propose Faithfulness by Unlearning Reasoning steps (FUR), an
instance of this framework. FUR erases information contained in reasoning steps
from model parameters. We perform experiments unlearning CoTs of four LMs
prompted on four multi-choice question answering (MCQA) datasets. Our
experiments show that FUR is frequently able to change the underlying models'
prediction by unlearning key steps, indicating when a CoT is parametrically
faithful. Further analysis shows that CoTs generated by models post-unlearning
support different answers, hinting at a deeper effect of unlearning.
Importantly, CoT steps identified as important by FUR do not align well with
human notions of plausibility, emphasizing the need for specialized alignment
|
2502.14830
|
Middle-Layer Representation Alignment for Cross-Lingual Transfer in
Fine-Tuned LLMs
|
cs.CL cs.AI
|
While large language models demonstrate remarkable capabilities at
task-specific applications through fine-tuning, extending these benefits across
diverse languages is essential for broad accessibility. However, effective
cross-lingual transfer is hindered by LLM performance gaps across languages and
the scarcity of fine-tuning data in many languages. Through analysis of LLM
internal representations from over 1,000 language pairs, we discover that
middle layers exhibit the strongest potential for cross-lingual alignment.
Building on this finding, we propose a middle-layer alignment objective
integrated into task-specific training. Our experiments on slot filling,
machine translation, and structured text generation show consistent
improvements in cross-lingual transfer, especially to lower-resource languages.
The method is robust to the choice of alignment languages and generalizes to
languages unseen during alignment. Furthermore, we show that separately trained
alignment modules can be merged with existing task-specific modules, improving
cross-lingual capabilities without full re-training. Our code is publicly
available (https://github.com/dannigt/mid-align).
|
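The middle-layer alignment objective described above can be sketched as an auxiliary loss added to task-specific training. The mean-pooling, the MSE form, and the toy vectors below are illustrative assumptions, not the paper's exact formulation:

```python
def mean_pool(hidden):
    """Average per-token vectors into one sentence-level vector."""
    n = len(hidden)
    return [sum(tok[d] for tok in hidden) / n for d in range(len(hidden[0]))]

def alignment_loss(src_hidden, tgt_hidden):
    """MSE between pooled middle-layer states of a parallel sentence pair."""
    s, t = mean_pool(src_hidden), mean_pool(tgt_hidden)
    return sum((a - b) ** 2 for a, b in zip(s, t)) / len(s)

# Toy 2-token sentences with 3-dim "hidden states" standing in for
# middle-layer activations of the source and target language.
src = [[1.0, 0.0, 2.0], [3.0, 0.0, 0.0]]
tgt = [[1.0, 1.0, 2.0], [3.0, 1.0, 0.0]]
print(alignment_loss(src, tgt))  # 1/3: only the second dimension differs
```

In training, this term would be weighted and added to the task loss so that gradients pull the two languages' middle-layer representations together.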
2502.14831
|
Improving the Diffusability of Autoencoders
|
cs.CV cs.AI cs.LG
|
Latent diffusion models have emerged as the leading approach for generating
high-quality images and videos, utilizing compressed latent representations to
reduce the computational burden of the diffusion process. While recent
advancements have primarily focused on scaling diffusion backbones and
improving autoencoder reconstruction quality, the interaction between these
components has received comparatively less attention. In this work, we perform
a spectral analysis of modern autoencoders and identify inordinate
high-frequency components in their latent spaces, which are especially
pronounced in the autoencoders with a large bottleneck channel size. We
hypothesize that this high-frequency component interferes with the
coarse-to-fine nature of the diffusion synthesis process and hinders the
generation quality. To mitigate the issue, we propose scale equivariance: a
simple regularization strategy that aligns latent and RGB spaces across
frequencies by enforcing scale equivariance in the decoder. It requires minimal
code changes and only up to 20K autoencoder fine-tuning steps, yet
significantly improves generation quality, reducing FID by 19% for image
generation on ImageNet-1K 256x256 and FVD by at least 44% for video generation
on Kinetics-700 17x256x256.
|
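The scale-equivariance regularizer can be illustrated in one dimension: penalize the decoder when decoding-then-downsampling differs from downsampling-then-decoding. The 1D average pooling and the toy "decoders" are assumptions for the demo, not the paper's actual architecture:

```python
def downsample(x):
    """Average-pool a 1D signal by a factor of 2."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def scale_equivariance_loss(decode, z):
    """MSE between downsample(decode(z)) and decode(downsample(z)):
    the regularizer's core idea of aligning scales across spaces."""
    a = downsample(decode(z))
    b = decode(downsample(z))
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(b)

# A linear "decoder" (x -> 2x) commutes with average pooling: loss is 0.
linear_decode = lambda z: [2 * v for v in z]
print(scale_equivariance_loss(linear_decode, [1.0, 3.0, 5.0, 7.0]))  # 0.0
```

A nonlinear decoder that does not commute with pooling incurs a positive penalty, which is what pushes the latent space toward scale equivariance during fine-tuning.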
2502.14833
|
Probabilistic Robustness in Deep Learning: A Concise yet Comprehensive
Guide
|
cs.LG
|
Deep learning (DL) has demonstrated significant potential across various
safety-critical applications, yet ensuring its robustness remains a key
challenge. While adversarial robustness has been extensively studied in
worst-case scenarios, probabilistic robustness (PR) offers a more practical
perspective by quantifying the likelihood of failures under stochastic
perturbations. This paper provides a concise yet comprehensive overview of PR,
covering its formal definitions, evaluation and enhancement methods. We
introduce a reformulated "min-max" optimisation framework for adversarial
training specifically designed to improve PR. Furthermore, we explore the
integration of PR verification evidence into system-level safety assurance,
addressing challenges in translating DL model-level robustness to system-level
claims. Finally, we highlight open research questions, including benchmarking
PR evaluation methods, extending PR to generative AI tasks, and developing
rigorous methodologies and case studies for system-level integration.
|
2502.14834
|
LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in
Vision-Language Models
|
cs.CV cs.AI cs.CL
|
Existing Large Vision-Language Models (LVLMs) can process inputs with context
lengths up to 128k visual and text tokens, yet they struggle to generate
coherent outputs beyond 1,000 words. We find that the primary limitation is the
absence of long output examples during supervised fine-tuning (SFT). To tackle
this issue, we introduce LongWriter-V-22k, an SFT dataset comprising 22,158
examples, each with multiple input images, an instruction, and corresponding
outputs ranging from 0 to 10,000 words. Moreover, to achieve long outputs that
maintain high fidelity to the input images, we apply Direct Preference
Optimization (DPO) to the SFT model. Given the high cost of collecting human
feedback for lengthy outputs (e.g., 3,000 words), we propose IterDPO, which
breaks long outputs into segments and uses iterative corrections to form
preference pairs with the original outputs. Additionally, we develop
MMLongBench-Write, a benchmark featuring six tasks to evaluate the
long-generation capabilities of VLMs. Our 7B parameter model, trained with
LongWriter-V-22k and IterDPO, achieves impressive performance on this
benchmark, outperforming larger proprietary models like GPT-4o. Code and data:
https://github.com/THU-KEG/LongWriter-V
|
2502.14837
|
Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent
Attention in Any Transformer-based LLMs
|
cs.CL cs.AI
|
Multi-head Latent Attention (MLA) is an innovative architecture proposed by
DeepSeek, designed to ensure efficient and economical inference by
significantly compressing the Key-Value (KV) cache into a latent vector.
Compared to MLA, standard LLMs employing Multi-Head Attention (MHA) and its
variants such as Grouped-Query Attention (GQA) exhibit significant cost
disadvantages. Enabling well-trained LLMs (e.g., Llama) to rapidly adapt to MLA
without pre-training from scratch is both meaningful and challenging. This
paper proposes the first data-efficient fine-tuning method for transitioning
from MHA to MLA (MHA2MLA), which includes two key components: for partial-RoPE,
we remove RoPE from dimensions of queries and keys that contribute less to the
attention scores; for low-rank approximation, we introduce joint SVD
approximations based on the pre-trained parameters of keys and values. These
carefully designed strategies enable MHA2MLA to recover performance using only
a small fraction (0.3% to 0.6%) of the data, significantly reducing inference
costs while seamlessly integrating with compression techniques such as KV cache
quantization. For example, the KV cache size of Llama2-7B is reduced by 92.19%,
with only a 0.5% drop in LongBench performance.
|
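The low-rank component above can be illustrated with a joint SVD over concatenated key/value projection weights. The factorization below is a minimal sketch of the idea, with made-up shapes; it is not the paper's exact parameterization:

```python
import numpy as np

def joint_lowrank_kv(W_k, W_v, rank):
    """Jointly factor key/value projections via SVD into a shared
    down-projection (the latent) and a combined up-projection."""
    W = np.concatenate([W_k, W_v], axis=1)          # (d_model, 2 * d_head)
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    down = U[:, :rank] * S[:rank]                   # (d_model, rank)
    up = Vt[:rank]                                  # (rank, 2 * d_head)
    return down, up

# Toy weights: d_model = 8, d_head = 4. At full rank the factorization
# reconstructs the originals exactly; smaller ranks trade accuracy for a
# compressed KV cache.
rng = np.random.default_rng(0)
W_k, W_v = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
down, up = joint_lowrank_kv(W_k, W_v, rank=8)
err = np.abs(down @ up - np.concatenate([W_k, W_v], axis=1)).max()
print(err < 1e-8)  # True
```

At inference time only the `rank`-dimensional latent would need caching, which is the source of the KV-cache savings the abstract reports.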
2502.14838
|
Revealing and Mitigating Over-Attention in Knowledge Editing
|
cs.CL cs.AI
|
Large Language Models have demonstrated superior performance across a wide
range of tasks, but they still exhibit undesirable errors due to incorrect
knowledge learned from the training data. To avoid this, knowledge editing
methods emerged to precisely edit the specific model knowledge via efficiently
modifying a very small percentage of parameters. However, those methods can
lead to the problem of Specificity Failure: when content related to the edited
knowledge occurs in the context, it can inadvertently corrupt other
pre-existing knowledge, severely degrading the model's existing knowledge and
capabilities. Our preliminary analysis indicates that Specificity Failure
primarily stems from the model's attention heads assigning excessive attention
scores to entities related to the edited knowledge, thereby unduly focusing on
specific snippets within the context, which we denote as the Attention Drift
phenomenon. To mitigate the Attention Drift issue, we introduce a simple yet
effective method, Selective Attention Drift Restriction (SADR), which introduces
an additional regularization term during the knowledge editing process to
restrict changes in the attention weight distribution, thereby preventing undue
focus on the edited entity. Experiments on five frequently used strong LLMs
demonstrate the effectiveness of our method, where SADR can significantly
mitigate Specificity Failure in the predominant knowledge editing tasks.
|
2502.14840
|
Spatial Distribution-Shift Aware Knowledge-Guided Machine Learning
|
cs.LG
|
Given inputs of diverse soil characteristics and climate data gathered from
various regions, we aimed to build a model that predicts land emissions accurately.
The problem is important since accurate quantification of the carbon cycle in
agroecosystems is crucial for mitigating climate change and ensuring
sustainable food production. Predicting land emissions accurately is
challenging because calibrating the heterogeneous nature of soil properties,
moisture, and environmental conditions is hard at decision-relevant scales.
Traditional approaches do not adequately estimate land emissions because their
location-independent parameters fail to leverage spatial heterogeneity, and
they also require large datasets. To overcome these limitations, we proposed
Spatial Distribution-Shift Aware Knowledge-Guided Machine Learning (SDSA-KGML),
which leverages location-dependent parameters that account for significant
spatial heterogeneity in soil moisture from multiple sites within the same
region. Experimental results demonstrate that SDSA-KGML models achieve higher
local accuracy for the specified states in the Midwest Region.
|
2502.14842
|
Generating $\pi$-Functional Molecules Using STGG+ with Active Learning
|
cs.LG
|
Generating novel molecules with out-of-distribution properties is a major
challenge in molecular discovery. While supervised learning methods generate
high-quality molecules similar to those in a dataset, they struggle to
generalize to out-of-distribution properties. Reinforcement learning can
explore new chemical spaces but often engages in 'reward-hacking' and generates
non-synthesizable molecules. In this work, we address this problem by
integrating a state-of-the-art supervised learning method, STGG+, in an active
learning loop. Our approach iteratively generates, evaluates, and fine-tunes
STGG+ to continuously expand its knowledge. We denote this approach STGG+AL. We
apply STGG+AL to the design of organic $\pi$-functional materials, specifically
two challenging tasks: 1) generating highly absorptive molecules characterized
by high oscillator strength and 2) designing absorptive molecules with
reasonable oscillator strength in the near-infrared (NIR) range. The generated
molecules are validated and rationalized in-silico with time-dependent density
functional theory. Our results demonstrate that our method is highly effective
in generating novel molecules with high oscillator strength, in contrast to
existing methods such as reinforcement learning (RL). We open-source
our active-learning code along with our Conjugated-xTB dataset containing 2.9
million $\pi$-conjugated molecules and the function for approximating the
oscillator strength and absorption wavelength (based on sTDA-xTB).
|
2502.14844
|
Dynamic Concepts Personalization from Single Videos
|
cs.GR cs.CV cs.LG
|
Personalizing generative text-to-image models has seen remarkable progress,
but extending this personalization to text-to-video models presents unique
challenges. Unlike static concepts, personalizing text-to-video models has the
potential to capture dynamic concepts, i.e., entities defined not only by their
appearance but also by their motion. In this paper, we introduce
Set-and-Sequence, a novel framework for personalizing Diffusion Transformers
(DiTs)-based generative video models with dynamic concepts. Our approach
imposes a spatio-temporal weight space within an architecture that does not
explicitly separate spatial and temporal features. This is achieved in two key
stages. First, we fine-tune Low-Rank Adaptation (LoRA) layers using an
unordered set of frames from the video to learn an identity LoRA basis that
represents the appearance, free from temporal interference. In the second
stage, with the identity LoRAs frozen, we augment their coefficients with
Motion Residuals and fine-tune them on the full video sequence, capturing
motion dynamics. Our Set-and-Sequence framework results in a spatio-temporal
weight space that effectively embeds dynamic concepts into the video model's
output domain, enabling unprecedented editability and compositionality while
setting a new benchmark for personalizing dynamic concepts.
|
2502.14846
|
Scaling Text-Rich Image Understanding via Code-Guided Synthetic
Multimodal Data Generation
|
cs.CV cs.CL
|
Reasoning about images with rich text, such as charts and documents, is a
critical application of vision-language models (VLMs). However, VLMs often
struggle in these domains due to the scarcity of diverse text-rich
vision-language data. To address this challenge, we present CoSyn, a framework
that leverages the coding capabilities of text-only large language models
(LLMs) to automatically create synthetic text-rich multimodal data. Given input
text describing a target domain (e.g., "nutrition fact labels"), CoSyn prompts
an LLM to generate code (Python, HTML, LaTeX, etc.) for rendering synthetic
images. With the underlying code as textual representations of the synthetic
images, CoSyn can generate high-quality instruction-tuning data, again relying
on a text-only LLM. Using CoSyn, we constructed a dataset comprising 400K
images and 2.7M rows of vision-language instruction-tuning data. Comprehensive
experiments on seven benchmarks demonstrate that models trained on our
synthetic data achieve state-of-the-art performance among competitive
open-source models, including Llama 3.2, and surpass proprietary models such as
GPT-4V and Gemini 1.5 Flash. Furthermore, CoSyn can produce synthetic pointing
data, enabling VLMs to ground information within input images, showcasing its
potential for developing multimodal agents capable of acting in real-world
environments.
|
2502.14848
|
GATE: Graph-based Adaptive Tool Evolution Across Diverse Tasks
|
cs.CL
|
Large Language Models (LLMs) have shown great promise in tool-making, yet
existing frameworks often struggle to efficiently construct reliable toolsets
and are limited to single-task settings. To address these challenges, we
propose GATE (Graph-based Adaptive Tool Evolution), an adaptive framework that
dynamically constructs and evolves a hierarchical graph of reusable tools
across multiple scenarios. We evaluate GATE on open-ended tasks (Minecraft),
agent-based tasks (TextCraft, DABench), and code generation tasks (MATH, Date,
TabMWP). Our results show that GATE achieves up to 4.3x faster milestone
completion in Minecraft compared to the previous SOTA, and provides an average
improvement of 9.23% over existing tool-making methods in code generation tasks
and 10.03% in agent tasks. GATE demonstrates the power of adaptive evolution,
balancing tool quantity, complexity, and functionality while maintaining high
efficiency. Code and data are available at
\url{https://github.com/ayanami2003/GATE}.
|
2502.14853
|
On the $H$-property for Step-graphons: Residual Case
|
eess.SY cs.SY
|
We sample graphs $G_n$ on $n$ nodes from a step-graphon and evaluate the
probability that $G_n$ has a Hamiltonian decomposition in the asymptotic regime
as $n\to\infty$. It has recently been shown that for almost all step-graphons,
this probability converges to either zero or one. In this paper, we focus on
the class of step-graphons such that the zero-one property does not hold. We
show in this case that the limit of the probability still exists and provide an
explicit expression of it.
|
2502.14854
|
CLIPPER: Compression enables long-context synthetic data generation
|
cs.CL
|
LLM developers are increasingly reliant on synthetic data, but generating
high-quality data for complex long-context reasoning tasks remains challenging.
We introduce CLIPPER, a compression-based approach for generating synthetic
data tailored to narrative claim verification - a task that requires reasoning
over a book to verify a given claim. Instead of generating claims directly from
the raw text of the book, which results in artifact-riddled claims, CLIPPER
first compresses the book into chapter outlines and book summaries and then
uses these intermediate representations to generate complex claims and
corresponding chain-of-thoughts. Compared to naive approaches, CLIPPER produces
claims that are more valid, grounded, and complex. Using CLIPPER, we construct
a dataset of 19K synthetic book claims paired with their source texts and
chain-of-thought reasoning, and use it to fine-tune three open-weight models.
Our best model achieves breakthrough results on narrative claim verification
(from 28% to 76% accuracy on our test set) and sets a new state-of-the-art for
sub-10B models on the NoCha leaderboard. Further analysis shows that our models
generate more detailed and grounded chain-of-thought reasoning while also
improving performance on other narrative understanding tasks (e.g.,
NarrativeQA).
|
2502.14855
|
Prompt-to-Leaderboard
|
cs.LG cs.CL
|
Large language model (LLM) evaluations typically rely on aggregated metrics
like accuracy or human preference, averaging across users and prompts. This
averaging obscures user- and prompt-specific variations in model performance.
To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces
leaderboards specific to a prompt. The core idea is to train an LLM taking
natural language prompts as input to output a vector of Bradley-Terry
coefficients which are then used to predict the human preference vote. The
resulting prompt-dependent leaderboards allow for unsupervised task-specific
evaluation, optimal routing of queries to models, personalization, and
automated evaluation of model strengths and weaknesses. Data from Chatbot Arena
suggest that P2L better captures the nuanced landscape of language model
performance than the averaged leaderboard. Furthermore, our findings suggest
that P2L's ability to produce prompt-specific evaluations follows a power law
scaling similar to that observed in LLMs themselves. In January 2025, the
router we trained based on this methodology achieved the \#1 spot in the
Chatbot Arena leaderboard. Our code is available at this GitHub link:
https://github.com/lmarena/p2l.
|
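The Bradley-Terry step described above reduces to a simple computation once the per-prompt coefficients are in hand. The model names and coefficient values below are hypothetical stand-ins for what a P2L-style regressor might emit:

```python
import math

def bt_win_prob(theta_a, theta_b):
    """Bradley-Terry probability that model A is preferred over model B."""
    return 1.0 / (1.0 + math.exp(-(theta_a - theta_b)))

def prompt_leaderboard(coeffs):
    """Rank models for one prompt by their predicted BT coefficients."""
    return sorted(coeffs, key=coeffs.get, reverse=True)

# Hypothetical coefficients for a single prompt.
coeffs = {"model-a": 1.2, "model-b": 0.4, "model-c": -0.3}
print(prompt_leaderboard(coeffs))       # ['model-a', 'model-b', 'model-c']
print(round(bt_win_prob(1.2, 0.4), 2))  # 0.69
```

Because the coefficients are prompt-conditioned, the same three models can rank differently on a different prompt, which is what enables per-prompt routing.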
2502.14856
|
FR-Spec: Accelerating Large-Vocabulary Language Models via
Frequency-Ranked Speculative Sampling
|
cs.CL cs.AI cs.LG
|
Speculative sampling has emerged as an important technique for accelerating
the auto-regressive generation process of large language models (LLMs) by
utilizing a draft-then-verify mechanism to produce multiple tokens per forward
pass. While state-of-the-art speculative sampling methods use only a single
layer and a language modeling (LM) head as the draft model to achieve
impressive layer compression, their efficiency gains are substantially reduced
for large-vocabulary LLMs, such as Llama-3-8B with a vocabulary of 128k tokens.
To address this, we present FR-Spec, a frequency-ranked speculative sampling
framework that optimizes draft candidate selection through vocabulary space
compression. By constraining the draft search to a frequency-prioritized token
subset, our method reduces LM Head computation overhead by 75% while ensuring
the equivalence of the final output distribution. Experiments across multiple
datasets demonstrate an average of 1.12$\times$ speedup over the
state-of-the-art speculative sampling method EAGLE-2.
|
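The vocabulary-compression idea can be sketched as restricting the draft model's LM head to a frequency-ranked token subset. The token ids and logit values here are toy assumptions:

```python
from collections import Counter

def frequency_ranked_subset(corpus_token_ids, k):
    """The k most frequent token ids form the draft model's restricted vocabulary."""
    return [tok for tok, _ in Counter(corpus_token_ids).most_common(k)]

def restrict_logits(logits, subset):
    """Keep only the subset's logits; other tokens are never drafted.
    (The target model still verifies over the full vocabulary, which is
    what preserves the final output distribution.)"""
    return {tok: logits[tok] for tok in subset}

corpus = [3, 1, 3, 2, 3, 1, 0]            # toy token-id stream
subset = frequency_ranked_subset(corpus, 2)
logits = {0: 0.1, 1: 2.0, 2: 0.5, 3: 1.2}
print(subset)                              # [3, 1]
print(restrict_logits(logits, subset))     # {3: 1.2, 1: 2.0}
```

Shrinking the draft head this way cuts its matrix-multiply cost roughly in proportion to the subset size, while acceptance/rejection against the full model keeps sampling exact.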
2502.14860
|
Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning
|
cs.CL
|
Large language models (LLMs) often fail to ask effective questions under
uncertainty, making them unreliable in domains where proactive
information-gathering is essential for decision-making. We present ALFA, a
framework that improves LLM question-asking by (i) decomposing the notion of a
"good" question into a set of theory-grounded attributes (e.g., clarity,
relevance), (ii) controllably synthesizing attribute-specific question
variations, and (iii) aligning models via preference-based optimization to
explicitly learn to ask better questions along these fine-grained attributes.
Focusing on clinical reasoning as a case study, we introduce the MediQ-AskDocs
dataset, composed of 17k real-world clinical interactions augmented with 80k
attribute-specific preference pairs of follow-up questions, as well as a novel
expert-annotated interactive healthcare QA task to evaluate question-asking
abilities. Models aligned with ALFA reduce diagnostic errors by 56.6% on
MediQ-AskDocs compared to SOTA instruction-tuned LLMs, with a question-level
win-rate of 64.4% and strong generalizability. Our findings suggest that
explicitly guiding question-asking with structured, fine-grained attributes
offers a scalable path to improve LLMs, especially in expert application
domains.
|
2502.14862
|
Interpretable Text Embeddings and Text Similarity Explanation: A Primer
|
cs.CL cs.AI cs.IR
|
Text embeddings and text embedding models are a backbone of many AI and NLP
systems, particularly those involving search. However, interpretability
challenges persist, especially in explaining obtained similarity scores, which
is crucial for applications requiring transparency. In this paper, we give a
structured overview of interpretability methods specializing in explaining
those similarity scores, an emerging research area. We study the methods'
individual ideas and techniques, evaluating their potential for improving
interpretability of text embeddings and explaining predicted similarities.
|
2502.14864
|
Benchmarking Multimodal RAG through a Chart-based Document
Question-Answering Generation Framework
|
cs.AI cs.CV
|
Multimodal Retrieval-Augmented Generation (MRAG) enhances reasoning
capabilities by integrating external knowledge. However, existing benchmarks
primarily focus on simple image-text interactions, overlooking complex visual
formats like charts that are prevalent in real-world applications. In this
work, we introduce a novel task, Chart-based MRAG, to address this limitation.
To semi-automatically generate high-quality evaluation samples, we propose
CHARt-based document question-answering GEneration (CHARGE), a framework that
produces evaluation data through structured keypoint extraction, crossmodal
verification, and keypoint-based generation. By combining CHARGE with expert
validation, we construct Chart-MRAG Bench, a comprehensive benchmark for
chart-based MRAG evaluation, featuring 4,738 question-answering pairs across 8
domains from real-world documents. Our evaluation reveals three critical
limitations in current approaches: (1) unified multimodal embedding retrieval
methods struggle in chart-based scenarios, (2) even with ground-truth
retrieval, state-of-the-art MLLMs achieve only 58.19% Correctness and 73.87%
Coverage scores, and (3) MLLMs demonstrate consistent text-over-visual modality
bias during Chart-based MRAG reasoning. The CHARGE and Chart-MRAG Bench are
released at https://github.com/Nomothings/CHARGE.git.
|
2502.14865
|
Time Travel: A Comprehensive Benchmark to Evaluate LMMs on Historical
and Cultural Artifacts
|
cs.CV cs.LG
|
Understanding historical and cultural artifacts demands human expertise and
advanced computational techniques, yet the process remains complex and
time-intensive. While large multimodal models offer promising support, their
evaluation and improvement require a standardized benchmark. To address this,
we introduce TimeTravel, a benchmark of 10,250 expert-verified samples spanning
266 distinct cultures across 10 major historical regions. Designed for
AI-driven analysis of manuscripts, artworks, inscriptions, and archaeological
discoveries, TimeTravel provides a structured dataset and robust evaluation
framework to assess AI models' capabilities in classification, interpretation,
and historical comprehension. By integrating AI with historical research,
TimeTravel fosters AI-powered tools for historians, archaeologists,
researchers, and cultural tourists to extract valuable insights while ensuring
technology contributes meaningfully to historical discovery and cultural
heritage preservation. We evaluate contemporary AI models on TimeTravel,
highlighting their strengths and identifying areas for improvement. Our goal is
to establish AI as a reliable partner in preserving cultural heritage, ensuring
that technological advancements contribute meaningfully to historical
discovery. Our code is available at:
\url{https://github.com/mbzuai-oryx/TimeTravel}.
|
2502.14866
|
LServe: Efficient Long-sequence LLM Serving with Unified Sparse
Attention
|
cs.CL cs.AI cs.DC cs.LG cs.PF
|
Large language models (LLMs) have shown remarkable potential in processing
long sequences, yet efficiently serving these long-context models remains
challenging due to the quadratic computational complexity of attention in the
prefilling stage and the large memory footprint of the KV cache in the decoding
stage. To address these issues, we introduce LServe, an efficient system that
accelerates long-sequence LLM serving via hybrid sparse attention. This method
unifies different hardware-friendly, structured sparsity patterns for both
prefilling and decoding attention into a single framework, where computations
on less important tokens are skipped block-wise. LServe demonstrates the
compatibility of static and dynamic sparsity in long-context LLM attention.
This design enables multiplicative speedups by combining these optimizations.
Specifically, we convert half of the attention heads to nearly free streaming
heads in both the prefilling and decoding stages. Additionally, we find that
only a constant number of KV pages is required to preserve long-context
capabilities, irrespective of context length. We then design a hierarchical KV
page selection policy that dynamically prunes KV pages based on query-centric
similarity. On average, LServe accelerates LLM prefilling by up to 2.9x and
decoding by 1.3-2.1x over vLLM, maintaining long-context accuracy. Code is
released at https://github.com/mit-han-lab/omniserve.
|
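The query-centric KV page selection can be sketched as scoring each page by the similarity between the current query and a page-level key summary, then keeping the top-k pages. Using the mean key as the page representative is an assumption for illustration, not necessarily LServe's exact policy:

```python
def select_kv_pages(query, page_keys, k):
    """Score each KV page by the dot product between the query and the
    page's mean key, and keep the indices of the top-k pages."""
    def mean_key(keys):
        n = len(keys)
        return [sum(kv[d] for kv in keys) / n for d in range(len(keys[0]))]
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    scores = [dot(query, mean_key(keys)) for keys in page_keys]
    ranked = sorted(range(len(page_keys)), key=scores.__getitem__, reverse=True)
    return sorted(ranked[:k])  # page indices to keep, in original order

query = [1.0, 0.0]
pages = [
    [[1.0, 0.0], [0.9, 0.1]],  # page 0: aligned with the query
    [[0.0, 1.0], [0.1, 0.9]],  # page 1: nearly orthogonal
    [[0.5, 0.5], [0.6, 0.4]],  # page 2: in between
]
print(select_kv_pages(query, pages, 2))  # [0, 2]
```

Because only a constant number of pages survives the pruning, attention cost per decoding step stays bounded regardless of how long the context grows.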
adap-org/9807003
|
Development and Evolution of Neural Networks in an Artificial Chemistry
|
adap-org cs.NE nlin.AO q-bio.PE
|
We present a model of decentralized growth for Artificial Neural Networks
(ANNs) inspired by the development and the physiology of real nervous systems.
In this model, each individual artificial neuron is an autonomous unit whose
behavior is determined only by the genetic information it harbors and local
concentrations of substrates modeled by a simple artificial chemistry. Gene
expression is manifested as axon and dendrite growth, cell division and
differentiation, substrate production and cell stimulation. We demonstrate the
model's power with a hand-written genome that leads to the growth of a simple
network which performs classical conditioning. To evolve more complex
structures, we implemented a platform-independent, asynchronous, distributed
Genetic Algorithm (GA) that allows users to participate in evolutionary
experiments via the World Wide Web.
|
adap-org/9903003
|
Evolution of genetic organization in digital organisms
|
adap-org cs.NE nlin.AO q-bio.PE
|
We examine the evolution of expression patterns and the organization of
genetic information in populations of self-replicating digital organisms.
Seeding the experiments with a linearly expressed ancestor, we witness the
development of complex, parallel secondary expression patterns. Using
principles from information theory, we demonstrate an evolutionary pressure
towards overlapping expressions causing variation (and hence further evolution)
to sharply drop. Finally, we compare the overlapping sections of dominant
genomes to those portions which are singly expressed and observe a significant
difference in the entropy of their encoding.
|
alg-geom/9608018
|
Rank Two Bundles on Algebraic Curves and Decoding of Goppa Codes
|
alg-geom cs.IT math.AG math.IT
|
We study a connection between two topics: Decoding of Goppa codes arising
from an algebraic curve, and rank two extensions of certain line bundles on the
curve.
|
astro-ph/0008307
|
Science User Scenarios for a Virtual Observatory Design Reference
Mission: Science Requirements for Data Mining
|
astro-ph cs.DB cs.DL cs.IR
|
The knowledge discovery potential of the new large astronomical databases is
vast. When these are used in conjunction with the rich legacy data archives,
the opportunities for scientific discovery multiply rapidly. A Virtual
Observatory (VO) framework will enable transparent and efficient access,
search, retrieval, and visualization of data across multiple data repositories,
which are generally heterogeneous and distributed. Aspects of data mining that
apply to a variety of science user scenarios with a VO are reviewed. The
development of a VO should address the data mining needs of various
astronomical research constituencies. By way of example, two user scenarios are
presented which invoke applications and linkages of data across the catalog and
image domains in order to address specific astrophysics research problems.
These illustrate a subset of the desired capabilities and power of the VO, and
as such they represent potential components of a VO Design Reference Mission.
|
astro-ph/0010583
|
Data Mining in Astronomical Databases
|
astro-ph cs.DB cs.DL cs.IR
|
A Virtual Observatory (VO) will enable transparent and efficient access,
search, retrieval, and visualization of data across multiple data repositories,
which are generally heterogeneous and distributed. Aspects of data mining that
apply to a variety of science user scenarios with a VO are reviewed.
|
astro-ph/0402591
|
Evolutionary design of photometric systems and its application to Gaia
|
astro-ph cs.NE stat.ML
|
Designing a photometric system to best fulfil a set of scientific goals is a
complex task, demanding a compromise between conflicting requirements and
subject to various constraints. A specific example is the determination of
stellar astrophysical parameters (APs) - effective temperature, metallicity
etc. - across a wide range of stellar types. I present a novel approach to this
problem which makes minimal assumptions about the required filter system. By
considering a filter system as a set of free parameters it may be designed by
optimizing some figure-of-merit (FoM) with respect to these parameters. In the
example considered, the FoM is a measure of how well the filter system can
`separate' stars with different APs. This separation is vectorial in nature, in
the sense that the local directions of AP variance are preferably mutually
orthogonal to avoid AP degeneracy. The optimization is carried out with an
evolutionary algorithm, which uses principles of evolutionary biology to search
the parameter space. This model, HFD (Heuristic Filter Design), is applied to
the design of photometric systems for the Gaia space astrometry mission. The
optimized systems show a number of interesting features, not least the
persistence of broad, overlapping filters. These HFD systems perform at least
as well as other proposed systems for Gaia, although inadequacies remain in
all. The principles underlying HFD are quite generic and may be applied to
filter design for numerous other projects, such as the search for specific
types of objects or photometric redshift determination.
|
astro-ph/0502164
|
Particle Swarm Optimization: An efficient method for tracing periodic
orbits in 3D galactic potentials
|
astro-ph cs.NA cs.NE math.NA nlin.CD
|
We propose the Particle Swarm Optimization (PSO) as an alternative method for
locating periodic orbits in a three-dimensional (3D) model of barred galaxies.
We develop an appropriate scheme that transforms the problem of finding
periodic orbits into the problem of detecting global minimizers of a function,
which is defined on the Poincar\'{e} Surface of Section (PSS) of the
Hamiltonian system. By combining the PSO method with deflection techniques, we
succeeded in tracing systematically several periodic orbits of the system. The
method succeeded in tracing the initial conditions of periodic orbits in cases
where Newton iterative techniques had difficulties. In particular, we found
families of 2D and 3D periodic orbits associated with the inner 8:1 to 12:1
resonances, between the radial 4:1 and corotation resonances of our 3D Ferrers
bar model. The main advantages of the proposed algorithm are its simplicity, its
ability to work using function values solely, as well as its ability to locate
many periodic orbits per run at a given Jacobian constant.
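The abstract's core numerical tool is standard global-best particle swarm optimization, which searches a box by moving particles toward their personal and global best positions. The sketch below is a minimal generic PSO minimizer, not the paper's implementation: the deflection technique and the Poincaré-section objective are omitted, and the parameter names and defaults are illustrative choices only.

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box via basic global-best particle swarm optimization.

    bounds: list of (lo, hi) pairs, one per dimension.
    """
    dim = len(bounds)
    # Initialize particle positions uniformly in the box, velocities at zero.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-seen position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia plus attraction to personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate back into the search box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

As the abstract notes, the method needs only function values, so `f` here can be any black-box objective, such as a distance measured on a surface of section.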
|
astro-ph/0504006
|
Virtual Observatory: From Concept to Implementation
|
astro-ph cs.CE
|
We review the origins of the Virtual Observatory (VO) concept, and the
current status of the efforts in this field. VO is the response of the
astronomical community to the challenges posed by the modern massive and
complex data sets. It is a framework in which information technology is
harnessed to organize, maintain, and explore the rich information content of
the exponentially growing data sets, and to enable a qualitatively new science
to be done with them. VO will become a complete, open, distributed, web-based
framework for astronomy of the early 21st century. A number of significant
efforts worldwide are now striving to convert this vision into reality. The
technological and methodological challenges posed by the information-rich
astronomy are also common to many other fields. We see a fundamental change in
the way all science is done, driven by the information technology revolution.
|
astro-ph/0506110
|
Galactic Gradients, Postbiological Evolution and the Apparent Failure of
SETI
|
astro-ph cs.AI physics.soc-ph
|
Motivated by recent developments impacting our view of Fermi's paradox
(absence of extraterrestrials and their manifestations from our past light
cone), we suggest a reassessment of the problem itself, as well as of
strategies employed by SETI projects so far. The need for such reevaluation is
fueled not only by the failure of searches thus far, but also by great advances
recently made in astrophysics, astrobiology, computer science and future
studies, which have remained largely ignored in SETI practice. As an example of
the new approach, we consider the effects of the observed metallicity and
temperature gradients in the Milky Way on the spatial distribution of
hypothetical advanced extraterrestrial intelligent communities. While,
obviously, properties of such communities and their sociological and
technological preferences are entirely unknown, we assume that (1) they operate
in agreement with the known laws of physics, and (2) that at some point they
typically become motivated by a meta-principle embodying the central role of
information-processing; a prototype of the latter is the recently suggested
Intelligence Principle of Steven J. Dick. There are specific conclusions of
practical interest to be drawn from coupling of these reasonable assumptions
with the astrophysical and astrochemical structure of the Galaxy. In
particular, we suggest that the outer regions of the Galactic disk are most
likely locations for advanced SETI targets, and that intelligent communities
will tend to migrate outward through the Galaxy as their capacities of
information-processing increase, for both thermodynamical and astrochemical
reasons. This can also be regarded as a possible generalization of the Galactic
Habitable Zone, a concept currently much investigated in astrobiology.
|
astro-ph/0506308
|
Fast directional continuous spherical wavelet transform algorithms
|
astro-ph cs.IT math.IT
|
We describe the construction of a spherical wavelet analysis through the
inverse stereographic projection of the Euclidean planar wavelet framework,
introduced originally by Antoine and Vandergheynst and developed further by
Wiaux et al. Fast algorithms for performing the directional continuous wavelet
analysis on the unit sphere are presented. The fast directional algorithm,
based on the fast spherical convolution algorithm developed by Wandelt and
Gorski, provides a saving of O(sqrt(Npix)) over a direct quadrature
implementation for Npix pixels on the sphere, and allows one to perform a
directional spherical wavelet analysis of a 10^6 pixel map on a personal
computer.
|
astro-ph/0605042
|
How accurate are the time delay estimates in gravitational lensing?
|
astro-ph cs.LG
|
We present a novel approach to estimate the time delay between light curves
of multiple images in a gravitationally lensed system, based on Kernel methods
in the context of machine learning. We perform various experiments with
artificially generated irregularly-sampled data sets to study the effect of the
various levels of noise and the presence of gaps of various size in the
monitoring data. We compare the performance of our method with various other
popular methods of estimating the time delay and conclude, from experiments
with artificial data, that our method is least vulnerable to missing data and
irregular sampling, within reasonable bounds of Gaussian noise. Thereafter, we
use our method to determine the time delays between the two images of quasar
Q0957+561 from radio monitoring data at 4 cm and 6 cm, and conclude that if
only the observations at epochs common to both wavelengths are used, the time
delay gives consistent estimates, which can be combined to yield 408\pm 12
days. The full 6 cm dataset, which covers a longer monitoring period, yields a
value which is 10% larger, but this can be attributed to differences in
sampling and missing data.
|
astro-ph/0609159
|
A directional continuous wavelet transform on the sphere
|
astro-ph cs.IT math.IT
|
A new construction of a directional continuous wavelet analysis on the sphere
is derived herein. We adopt the harmonic scaling idea for the spherical
dilation operator recently proposed by Sanz et al. but extend the analysis to a
more general directional framework. Directional wavelets are a powerful
extension that allow one to also probe oriented structure in the analysed
function. Our spherical wavelet methodology has the advantage that all
functions and operators are defined directly on the sphere. The construction of
wavelets in our framework is demonstrated with an example.
|
astro-ph/0612688
|
Optimal filters on the sphere
|
astro-ph cs.IT math.IT
|
We derive optimal filters on the sphere in the context of detecting compact
objects embedded in a stochastic background process. The matched filter and the
scale adaptive filter are derived on the sphere in the most general setting,
allowing for directional template profiles and filters. The performance and
relative merits of the two optimal filters are discussed. The application of
optimal filter theory on the sphere to the detection of compact objects is
demonstrated on simulated mock data. A naive detection strategy is adopted,
with an initial aim of illustrating the application of the new optimal filters
derived on the sphere. Nevertheless, this simple object detection strategy is
demonstrated to perform well, even at low signal-to-noise ratio. Code written to
compute optimal filters on the sphere (S2FIL), to perform fast directional
filtering on the sphere (FastCSWT) and to construct the simulated mock data
(COMB) are all made publicly available from http://www.mrao.cam.ac.uk/~jdm57/
|
cmp-lg/9404001
|
An Alternative Conception of Tree-Adjoining Derivation
|
cmp-lg cs.CL
|
The precise formulation of derivation for tree-adjoining grammars has
important ramifications for a wide variety of uses of the formalism, from
syntactic analysis to semantic interpretation and statistical language
modeling. We argue that the definition of tree-adjoining derivation must be
reformulated in order to manifest the proper linguistic dependencies in
derivations. The particular proposal is both precisely characterizable through
a definition of TAG derivations as equivalence classes of ordered derivation
trees, and computationally operational, by virtue of a compilation to linear
indexed grammars together with an efficient algorithm for recognition and
parsing according to the compiled grammar.
|
cmp-lg/9404002
|
Lessons from a Restricted Turing Test
|
cmp-lg cs.CL
|
We report on the recent Loebner prize competition inspired by Turing's test
of intelligent behavior. The presentation covers the structure of the
competition and the outcome of its first instantiation in an actual event, and
an analysis of the purpose, design, and appropriateness of such a competition.
We argue that the competition has no clear purpose, that its design prevents
any useful outcome, and that such a competition is inappropriate given the
current level of technology. We then speculate as to suitable alternatives to
the Loebner prize.
|
cmp-lg/9404003
|
Restricting the Weak-Generative Capacity of Synchronous Tree-Adjoining
Grammars
|
cmp-lg cs.CL
|
The formalism of synchronous tree-adjoining grammars, a variant of standard
tree-adjoining grammars (TAG), was intended to allow the use of TAGs for
language transduction in addition to language specification. In previous work,
the definition of the transduction relation defined by a synchronous TAG was
given by appeal to an iterative rewriting process. The rewriting definition of
derivation is problematic in that it greatly extends the expressivity of the
formalism and makes the design of parsing algorithms difficult if not
impossible. We introduce a simple, natural definition of synchronous
tree-adjoining derivation, based on isomorphisms between standard
tree-adjoining derivations, that avoids the expressivity and implementability
problems of the original rewriting definition. The decrease in expressivity,
which would otherwise make the method unusable, is offset by the incorporation
of an alternative definition of standard tree-adjoining derivation, previously
proposed for completely separate reasons, thereby making it practical to
entertain using the natural definition of synchronous derivation. Nonetheless,
some remaining problematic cases call for yet more flexibility in the
definition; the isomorphism requirement may have to be relaxed. It remains for
future research to tune the exact requirements on the allowable mappings.
|
cmp-lg/9404004
|
An Empirically Motivated Reinterpretation of Dependency Grammar
|
cmp-lg cs.CL
|
Dependency grammar is usually interpreted as equivalent to a strict form of
X--bar theory that forbids the stacking of nodes of the same bar level (e.g.,
N' immediately dominating N' with the same head). But adequate accounts of
_one_--anaphora and of the semantics of multiple modifiers require such
stacking and accordingly argue against dependency grammar. Dependency grammar
can be salvaged by reinterpreting its claims about phrase structure, so that
modifiers map onto binary--branching X--bar trees rather than ``flat'' ones.
|
cmp-lg/9404005
|
Memoization in Constraint Logic Programming
|
cmp-lg cs.CL
|
This paper shows how to apply memoization (caching of subgoals and associated
answer substitutions) in a constraint logic programming setting. The research
is motivated by the desire to apply constraint logic programming (CLP) to
problems in natural language processing that involve (constraint) interleaving
or coroutining, such as GB and HPSG parsing.
|
cmp-lg/9404006
|
SPANISH 1992 (S92): corpus-based analysis of present-day Spanish for
medical purposes
|
cmp-lg cs.CL
|
S92 research was begun in 1987 to analyze word frequencies in present-day
Spanish for making speech pathology evaluation tools. 500 2,000-word samples of
children, adolescents and adults' language were input between 1988-1991,
calculations done in 1992; statistical and Lewandowski analyses were carried
out in 1993.
|
cmp-lg/9404007
|
Constraint-Based Categorial Grammar
|
cmp-lg cs.CL
|
We propose a generalization of Categorial Grammar in which lexical categories
are defined by means of recursive constraints. In particular, the introduction
of relational constraints allows one to capture the effects of (recursive)
lexical rules in a computationally attractive manner. We illustrate the
linguistic merits of the new approach by showing how it accounts for the syntax
of Dutch cross-serial dependencies and the position and scope of adjuncts in
such constructions. Delayed evaluation is used to process grammars containing
recursive constraints.
|
cmp-lg/9404008
|
Principles and Implementation of Deductive Parsing
|
cmp-lg cs.CL
|
We present a system for generating parsers based directly on the metaphor of
parsing as deduction. Parsing algorithms can be represented directly as
deduction systems, and a single deduction engine can interpret such deduction
systems so as to implement the corresponding parser. The method generalizes
easily to parsers for augmented phrase structure formalisms, such as
definite-clause grammars and other logic grammar formalisms, and has been used
for rapid prototyping of parsing algorithms for a variety of formalisms
including variants of tree-adjoining grammars, categorial grammars, and
lexicalized context-free grammars.
|
cmp-lg/9404009
|
A Deductive Account of Quantification in LFG
|
cmp-lg cs.CL
|
The relationship between Lexical-Functional Grammar (LFG) functional
structures (f-structures) for sentences and their semantic interpretations can
be expressed directly in a fragment of linear logic in a way that explains
correctly the constrained interactions between quantifier scope ambiguity and
bound anaphora. The use of a deductive framework to account for the
compositional properties of quantifying expressions in natural language
obviates the need for additional mechanisms, such as Cooper storage, to
represent the different scopes that a quantifier might take. Instead, the
semantic contribution of a quantifier is recorded as an ordinary logical
formula, one whose use in a proof will establish the scope of the quantifier.
The properties of linear logic ensure that each quantifier is scoped exactly
once. Our analysis of quantifier scope can be seen as a recasting of Pereira's
analysis (Pereira, 1991), which was expressed in higher-order intuitionistic
logic. But our use of LFG and linear logic provides a much more direct and
computationally more flexible interpretation mechanism for at least the same
range of phenomena. We have developed a preliminary Prolog implementation of
the linear deductions described in this work.
|
cmp-lg/9404010
|
Intensional Verbs Without Type-Raising or Lexical Ambiguity
|
cmp-lg cs.CL
|
We present an analysis of the semantic interpretation of intensional verbs
such as seek that allows them to take direct objects of either individual or
quantifier type, producing both de dicto and de re readings in the quantifier
case, all without needing to stipulate type-raising or quantifying-in rules.
This simple account follows directly from our use of logical deduction in
linear logic to express the relationship between syntactic structures and
meanings. While our analysis resembles current categorial approaches in
important ways, it differs from them in allowing the greater type flexibility
of categorial semantics while maintaining a precise connection to syntax. As a
result, we are able to provide derivations for certain readings of sentences
with intensional verbs and complex direct objects that are not derivable in
current purely categorial accounts of the syntax-semantics interface. The
analysis forms a part of our ongoing work on semantic interpretation within the
framework of Lexical-Functional Grammar.
|
cmp-lg/9404011
|
Adjuncts and the Processing of Lexical Rules
|
cmp-lg cs.CL
|
The standard HPSG analysis of Germanic verb clusters cannot explain the
observed narrow-scope readings of adjuncts in such verb clusters. We present an
extension of the HPSG analysis that accounts for the systematic ambiguity of
the scope of adjuncts in verb cluster constructions, by treating adjuncts as
members of the subcat list. The extension uses powerful recursive lexical
rules, implemented as complex constraints. We show how `delayed evaluation'
techniques from constraint-logic programming can be used to process such
lexical rules.
|
cmp-lg/9405001
|
Similarity-Based Estimation of Word Cooccurrence Probabilities
|
cmp-lg cs.CL
|
In many applications of natural language processing it is necessary to
determine the likelihood of a given word combination. For example, a speech
recognizer may need to determine which of the two word combinations ``eat a
peach'' and ``eat a beach'' is more likely. Statistical NLP methods determine
the likelihood of a word combination according to its frequency in a training
corpus. However, the nature of language is such that many word combinations are
infrequent and do not occur in a given corpus. In this work we propose a method
for estimating the probability of such previously unseen word combinations
using available information on ``most similar'' words. We describe a
probabilistic word association model based on distributional word similarity,
and apply it to improving probability estimates for unseen word bigrams in a
variant of Katz's back-off model. The similarity-based method yields a 20%
perplexity improvement in the prediction of unseen bigrams and statistically
significant reductions in speech-recognition error.
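The idea in this abstract, estimating an unseen bigram's probability from the word's "most similar" neighbours, can be sketched compactly. The toy below uses cosine similarity between right-neighbour count profiles as the distributional similarity measure; the paper's actual similarity function, Katz back-off integration, and discounting are not reproduced here, and all function names are illustrative.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dict-like)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_smoothed_prob(w1, w2, bigrams, vocab):
    """Estimate P(w2 | w1), backing off to words distributionally similar to w1.

    bigrams: Counter over (left, right) word pairs from a training corpus,
    with every left word contained in vocab.
    """
    # Build each word's right-neighbour profile.
    ctx = {w: Counter() for w in vocab}
    for (a, b), c in bigrams.items():
        ctx[a][b] += c
    direct = ctx[w1]
    if direct[w2] > 0:                    # seen bigram: relative frequency
        return direct[w2] / sum(direct.values())
    # Unseen bigram: average P(w2 | w') over other words w', weighting each
    # by its distributional similarity to w1.
    num = den = 0.0
    for w in vocab:
        if w == w1 or not ctx[w]:
            continue
        s = cosine(direct, ctx[w])
        num += s * ctx[w][w2] / sum(ctx[w].values())
        den += s
    return num / den if den else 0.0
```

In the abstract's example, an unseen "eat a beach" would inherit a low estimate because words similar to "eat" rarely precede "beach", while "eat a peach" is supported by similar verbs that do take "peach".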
|
cmp-lg/9405002
|
Temporal Relations: Reference or Discourse Coherence?
|
cmp-lg cs.CL
|
The temporal relations that hold between events described by successive
utterances are often left implicit or underspecified. We address the role of
two phenomena with respect to the recovery of these relations: (1) the
referential properties of tense, and (2) the role of temporal constraints
imposed by coherence relations. We account for several facets of the
identification of temporal relations through an integration of these.
|
cmp-lg/9405003
|
Some Bibliographical References on Intonation and Intonational Meaning
|
cmp-lg cs.CL
|
A by-no-means-complete collection of references for those interested in
intonational meaning, with other miscellaneous references on intonation
included. Additional references are welcome, and should be sent to
julia@research.att.com.
|
cmp-lg/9405004
|
Syntactic-Head-Driven Generation
|
cmp-lg cs.CL
|
The previously proposed semantic-head-driven generation methods run into
problems if none of the daughter constituents in the syntacto-semantic rule
schemata of a grammar fits the definition of a semantic head given in Shieber
et al. 1990. This is the case for the semantic analysis rules of certain
constraint-based semantic representations, e.g. Underspecified Discourse
Representation Structures (UDRSs) (Frank/Reyle 1992). Since head-driven
generation in general has its merits, we simply return to a syntactic
definition of `head' and demonstrate the feasibility of syntactic-head-driven
generation. In addition to its generality, a syntactic-head-driven algorithm
provides a basis for a logically well-defined treatment of the movement of
(syntactic) heads, for which only ad-hoc solutions existed, so far.
|
cmp-lg/9405005
|
Pearl: A Probabilistic Chart Parser
|
cmp-lg cs.CL
|
This paper describes a natural language parsing algorithm for unrestricted
text which uses a probability-based scoring function to select the "best" parse
of a sentence. The parser, Pearl, is a time-asynchronous bottom-up chart parser
with Earley-type top-down prediction which pursues the highest-scoring theory
in the chart, where the score of a theory represents the extent to which the
context of the sentence predicts that interpretation. This parser differs from
previous attempts at stochastic parsers in that it uses a richer form of
conditional probabilities based on context to predict likelihood. Pearl also
provides a framework for incorporating the results of previous work in
part-of-speech assignment, unknown word models, and other probabilistic models
of linguistic features into one parsing tool, interleaving these techniques
instead of using the traditional pipeline architecture. In preliminary tests,
Pearl has been successful at resolving part-of-speech and word (in speech
processing) ambiguity, determining categories for unknown words, and selecting
correct parses first using a very loosely fitting covering grammar.
|
cmp-lg/9405006
|
Efficiency, Robustness, and Accuracy in Picky Chart Parsing
|
cmp-lg cs.CL
|
This paper describes Picky, a probabilistic agenda-based chart parsing
algorithm which uses a technique called {\em probabilistic prediction} to
predict which grammar rules are likely to lead to an acceptable parse of the
input. Using a suboptimal search method, Picky significantly reduces the number
of edges produced by CKY-like chart parsing algorithms, while maintaining the
robustness of pure bottom-up parsers and the accuracy of existing probabilistic
parsers. Experiments using Picky demonstrate how probabilistic modelling can
impact upon the efficiency, robustness and accuracy of a parser.
|
cmp-lg/9405007
|
Towards History-based Grammars: Using Richer Models for Probabilistic
Parsing
|
cmp-lg cs.CL
|
We describe a generative probabilistic model of natural language, which we
call HBG, that takes advantage of detailed linguistic information to resolve
ambiguity. HBG incorporates lexical, syntactic, semantic, and structural
information from the parse tree into the disambiguation process in a novel way.
We use a corpus of bracketed sentences, called a Treebank, in combination with
decision tree building to tease out the relevant aspects of a parse tree that
will determine the correct parse of a sentence. This stands in contrast to the
usual approach of further grammar tailoring via the usual linguistic
introspection in the hope of generating the correct parse. In head-to-head
tests against one of the best existing robust probabilistic parsing models,
which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing
the parsing accuracy rate from 60% to 75%, a 37% reduction in error.
|
cmp-lg/9405008
|
A Stochastic Finite-State Word-Segmentation Algorithm for Chinese
|
cmp-lg cs.CL
|
We present a stochastic finite-state model for segmenting Chinese text into
dictionary entries and productively derived words, and providing pronunciations
for these words; the method incorporates a class-based model in its treatment
of personal names. We also evaluate the system's performance, taking into
account the fact that people often do not agree on a single segmentation.
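The backbone of dictionary-based probabilistic segmentation can be illustrated with a short dynamic program: score each segmentation by the product of its word probabilities and recover the best one with back-pointers. This is a generic Viterbi-style sketch under a unigram word model, not the paper's finite-state system, and it omits the class-based treatment of personal names and productive derivation described in the abstract.

```python
import math

def segment(text, unigram_probs):
    """Return the most probable segmentation of `text` into dictionary words.

    unigram_probs: dict mapping word -> probability.  Returns None when no
    concatenation of dictionary words covers the whole string.
    """
    n = len(text)
    max_len = max(map(len, unigram_probs))
    # best[j] = (log-prob of best segmentation of text[:j], back-pointer i)
    best = [(-math.inf, 0)] * (n + 1)
    best[0] = (0.0, 0)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            w = text[i:j]
            if w in unigram_probs and best[i][0] > -math.inf:
                score = best[i][0] + math.log(unigram_probs[w])
                if score > best[j][0]:
                    best[j] = (score, i)
    if best[n][0] == -math.inf:
        return None
    # Follow back-pointers from the end of the string.
    words, j = [], n
    while j > 0:
        i = best[j][1]
        words.append(text[i:j])
        j = i
    return words[::-1]
```

Because longer dictionary entries carry their probability in a single factor, a plausible compound such as a two-character word can outscore a character-by-character analysis of the same span.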
|
cmp-lg/9405009
|
Natural Language Parsing as Statistical Pattern Recognition
|
cmp-lg cs.CL
|
Traditional natural language parsers are based on rewrite rule systems
developed in an arduous, time-consuming manner by grammarians. A majority of
the grammarian's efforts are devoted to the disambiguation process, first
hypothesizing rules which dictate constituent categories and relationships
among words in ambiguous sentences, and then seeking exceptions and corrections
to these rules.
In this work, I propose an automatic method for acquiring a statistical
parser from a set of parsed sentences which takes advantage of some initial
linguistic input, but avoids the pitfalls of the iterative and seemingly
endless grammar development process. Based on distributionally-derived and
linguistically-based features of language, this parser acquires a set of
statistical decision trees which assign a probability distribution on the space
of parse trees given the input sentence. These decision trees take advantage of
a significant amount of contextual information, potentially including all of the
lexical information in the sentence, to produce highly accurate statistical
models of the disambiguation process. By basing the disambiguation criteria
selection on entropy reduction rather than human intuition, this parser
development method is able to consider more sentences than a human grammarian
can when making individual disambiguation rules.
In experiments between a parser, acquired using this statistical framework,
and a grammarian's rule-based parser, developed over a ten-year period, both
using the same training material and test sentences, the decision tree parser
significantly outperformed the grammar-based parser on the accuracy measure
which the grammarian was trying to maximize, achieving an accuracy of 78%
compared to the grammar-based parser's 69%.
|
cmp-lg/9405010
|
Common Topics and Coherent Situations: Interpreting Ellipsis in the
Context of Discourse Inference
|
cmp-lg cs.CL
|
It is claimed that a variety of facts concerning ellipsis, event reference,
and interclausal coherence can be explained by two features of the linguistic
form in question: (1) whether the form leaves behind an empty constituent in
the syntax, and (2) whether the form is anaphoric in the semantics. It is
proposed that these features interact with one of two types of discourse
inference, namely {\it Common Topic} inference and {\it Coherent Situation}
inference. The differing ways in which these types of inference utilize
syntactic and semantic representations predicts phenomena for which it is
otherwise difficult to account.
|
cmp-lg/9405011
|
A Plan-Based Model for Response Generation in Collaborative
Task-Oriented Dialogues
|
cmp-lg cs.CL
|
This paper presents a plan-based architecture for response generation in
collaborative consultation dialogues, with emphasis on cases in which the
system (consultant) and user (executing agent) disagree. Our work contributes
to an overall system for collaborative problem-solving by providing a
plan-based framework that captures the {\em Propose-Evaluate-Modify} cycle of
collaboration, and by allowing the system to initiate subdialogues to negotiate
proposed additions to the shared plan and to provide support for its claims. In
addition, our system handles in a unified manner the negotiation of proposed
domain actions, proposed problem-solving actions, and beliefs proposed by
discourse actions. Furthermore, it captures cooperative responses within the
collaborative framework and accounts for why questions are sometimes never
answered.
|
cmp-lg/9405012
|
Integration Of Visual Inter-word Constraints And Linguistic Knowledge In
Degraded Text Recognition
|
cmp-lg cs.CL
|
Degraded text recognition is a difficult task. Given a noisy text image, a
word recognizer can be applied to generate several candidates for each word
image. High-level knowledge sources can then be used to select a decision from
the candidate set for each word image. In this paper, we propose that visual
inter-word constraints can be used to facilitate candidate selection. Visual
inter-word constraints provide a way to link word images inside the text page,
and to interpret them systematically.
|
cmp-lg/9405013
|
Collaboration on reference to objects that are not mutually known
|
cmp-lg cs.CL
|
In conversation, a person sometimes has to refer to an object that is not
previously known to the other participant. We present a plan-based model of how
agents collaborate on reference of this sort. In making a reference, an agent
uses the most salient attributes of the referent. In understanding a reference,
an agent determines his confidence in its adequacy as a means of identifying
the referent. To collaborate, the agents use judgment, suggestion, and
elaboration moves to refashion an inadequate referring expression.
|
cmp-lg/9405014
|
Classifying Cue Phrases in Text and Speech Using Machine Learning
|
cmp-lg cs.CL
|
Cue phrases may be used in a discourse sense to explicitly signal discourse
structure, but also in a sentential sense to convey semantic rather than
structural information. This paper explores the use of machine learning for
classifying cue phrases as discourse or sentential. Two machine learning
programs (Cgrendel and C4.5) are used to induce classification rules from sets
of pre-classified cue phrases and their features. Machine learning is shown to
be an effective technique for not only automating the generation of
classification rules, but also for improving upon previous results.
|
cmp-lg/9405015
|
Intention-based Segmentation: Human Reliability and Correlation with
Linguistic Cues
|
cmp-lg cs.CL
|
Certain spans of utterances in a discourse, referred to here as segments, are
widely assumed to form coherent units. Further, the segmental structure of
discourse has been claimed to constrain and be constrained by many phenomena.
However, there is weak consensus on the nature of segments and the criteria for
recognizing or generating them. We present quantitative results of a two part
study using a corpus of spontaneous, narrative monologues. The first part
evaluates the statistical reliability of human segmentation of our corpus,
where speaker intention is the segmentation criterion. We then use the
subjects' segmentations to evaluate the correlation of discourse segmentation
with three linguistic cues (referential noun phrases, cue words, and pauses),
using information retrieval metrics.
|
cmp-lg/9405016
|
Precise n-gram Probabilities from Stochastic Context-free Grammars
|
cmp-lg cs.CL
|
We present an algorithm for computing n-gram probabilities from stochastic
context-free grammars, a procedure that can alleviate some of the standard
problems associated with n-grams (estimation from sparse data, lack of
linguistic structure, among others). The method operates via the computation of
substring expectations, which in turn is accomplished by solving systems of
linear equations derived from the grammar. We discuss efficient implementation
of the algorithm and report our practical experience with it.
|
cmp-lg/9405017
|
Best-first Model Merging for Hidden Markov Model Induction
|
cmp-lg cs.CL
|
This report describes a new technique for inducing the structure of Hidden
Markov Models from data which is based on the general `model merging' strategy
(Omohundro 1992). The process begins with a maximum likelihood HMM that
directly encodes the training data. Successively more general models are
produced by merging HMM states. A Bayesian posterior probability criterion is
used to determine which states to merge and when to stop generalizing. The
procedure may be considered a heuristic search for the HMM structure with the
highest posterior probability. We discuss a variety of possible priors for
HMMs, as well as a number of approximations which improve the computational
efficiency of the algorithm. We studied three applications to evaluate the
procedure. The first compares the merging algorithm with the standard
Baum-Welch approach in inducing simple finite-state languages from small,
positive-only training samples. We found that the merging procedure is more
robust and accurate, particularly with a small amount of training data. The
second application uses labelled speech data from the TIMIT database to build
compact, multiple-pronunciation word models that can be used in speech
recognition. Finally, we describe how the algorithm was incorporated in an
operational speech understanding system, where it is combined with neural
network acoustic likelihood estimators to improve performance over
single-pronunciation word models.
|
cmp-lg/9405018
|
Memory-Based Lexical Acquisition and Processing
|
cmp-lg cs.CL
|
Current approaches to computational lexicology in language technology are
knowledge-based (competence-oriented) and try to abstract away from specific
formalisms, domains, and applications. This results in severe complexity,
acquisition and reusability bottlenecks. As an alternative, we propose a
particular performance-oriented approach to Natural Language Processing based
on automatic memory-based learning of linguistic (lexical) tasks. The
consequences of the approach for computational lexicology are discussed, and
the application of the approach on a number of lexical acquisition and
disambiguation tasks in phonology, morphology and syntax is described.
|
cmp-lg/9405019
|
Determination of referential property and number of nouns in Japanese
sentences for machine translation into English
|
cmp-lg cs.CL
|
When translating Japanese nouns into English, we face the problem of articles
and numbers which the Japanese language does not have, but which are necessary
for English composition. To solve this difficult problem, we classified the
referential property and the number of nouns into three types each.
This paper shows that the referential property and the number of nouns in a
sentence can be estimated fairly reliably by the words in the sentence. Many
rules for the estimation were written in forms similar to rewriting rules in
expert systems. We obtained correct recognition scores of 85.5\% and 89.0\% in
the estimation of the referential property and the number, respectively, for
the sentences that were used to construct our rules. We tested these rules on
some other texts and obtained scores of 68.9\% and 85.6\% respectively.
|
cmp-lg/9405020
|
Capturing CFLs with Tree Adjoining Grammars
|
cmp-lg cs.CL
|
We define a decidable class of TAGs that is strongly equivalent to CFGs and
is cubic-time parsable. This class serves to lexicalize CFGs in the same manner
as the LCFGs of Schabes and Waters but with considerably less restriction on
the form of the grammars. The class provides a normal form for TAGs that
generate local sets in much the same way that regular grammars provide a normal
form for CFGs that generate regular sets.
|
cmp-lg/9405021
|
Generating Precondition Expressions in Instructional Text
|
cmp-lg cs.CL
|
This study employs a knowledge intensive corpus analysis to identify the
elements of the communicative context which can be used to determine the
appropriate lexical and grammatical form of instructional texts. \ig, an
instructional text generation system based on this analysis, is presented,
particularly with reference to its expression of precondition relations.
|
cmp-lg/9405022
|
Grammar Specialization through Entropy Thresholds
|
cmp-lg cs.CL
|
Explanation-based generalization is used to extract a specialized grammar
from the original one using a training corpus of parse trees. This allows very
much faster parsing and gives a lower error rate, at the price of a small loss
in coverage. Previously, it has been necessary to specify the tree-cutting
criteria (or operationality criteria) manually; here they are derived
automatically from the training set and the desired coverage of the specialized
grammar. This is done by assigning an entropy value to each node in the parse
trees and cutting in the nodes with sufficiently high entropy values.
|
cmp-lg/9405023
|
An Integrated Heuristic Scheme for Partial Parse Evaluation
|
cmp-lg cs.CL
|
GLR* is a recently developed robust version of the Generalized LR Parser,
that can parse almost ANY input sentence by ignoring unrecognizable parts of
the sentence. On a given input sentence, the parser returns a collection of
parses that correspond to maximal, or close to maximal, parsable subsets of the
original input. This paper describes recent work on developing an integrated
heuristic scheme for selecting the parse that is deemed ``best'' from such a
collection. We describe the heuristic measures used and their combination
scheme. Preliminary results from experiments on parsing speech-recognized
spontaneous speech are also reported.
|
cmp-lg/9405024
|
Abductive Equivalential Translation and its application to Natural
Language Database Interfacing
|
cmp-lg cs.CL
|
The thesis describes a logical formalization of natural-language database
interfacing. We assume the existence of a ``natural language engine'' capable
of mediating between surface linguistic strings and their representations as
``literal'' logical forms: the focus of interest will be the question of
relating ``literal'' logical forms to representations in terms of primitives
meaningful to the underlying database engine. We begin by describing the nature
of the problem, and show how a variety of interface functionalities can be
considered as instances of a type of formal inference task which we call
``Abductive Equivalential Translation'' (AET); functionalities which can be
reduced to this form include answering questions, responding to commands,
reasoning about the completeness of answers, answering meta-questions of type
``Do you know...'', and generating assertions and questions. In each case, a
``linguistic domain theory'' (LDT) $\Gamma$ and an input formula $F$ are given,
and the goal is to construct a formula with certain properties which is
equivalent to $F$, given $\Gamma$ and a set of permitted assumptions. If the
LDT is of a certain specified type, whose formulas are either conditional
equivalences or Horn-clauses, we show that the AET problem can be reduced to a
goal-directed inference method. We present an abstract description of this
method, and sketch its realization in Prolog. The relationship between AET and
several problems previously discussed in the literature is discussed. In
particular, we show how AET can provide a simple and elegant solution to the
so-called ``Doctor on Board'' problem, and in effect allows a
``relativization'' of the Closed World Assumption. The ideas in the thesis have
all been implemented concretely within the SRI CLARE project, using a real
projects and payments database. The LDT for the example database is described
in detail, and examples of the types of functionality that can be achieved
within the example domain are presented.
|
cmp-lg/9405025
|
An Optimal Tabular Parsing Algorithm
|
cmp-lg cs.CL
|
In this paper we relate a number of parsing algorithms which have been
developed in very different areas of parsing theory, and which include
deterministic algorithms, tabular algorithms, and a parallel algorithm. We show
that these algorithms are based on the same underlying ideas. By relating
existing ideas, we hope to provide an opportunity to improve some algorithms
based on features of others. A second purpose of this paper is to answer a
question which has come up in the area of tabular parsing, namely how to obtain
a parsing algorithm with the property that the table will contain as few
entries as possible, but without the possibility that two entries represent the
same subderivation.
|
cmp-lg/9405026
|
An Extended Theory of Head-Driven Parsing
|
cmp-lg cs.CL
|
We show that more head-driven parsing algorithms can be formulated than those
occurring in the existing literature. These algorithms are inspired by a family
of left-to-right parsing algorithms from a recent publication. We further
introduce a more advanced notion of ``head-driven parsing'' which allows more
detailed specification of the processing order of non-head elements in the
right-hand side. We develop a parsing algorithm for this strategy, based on LR
parsing techniques.
|
cmp-lg/9405027
|
Acquiring Receptive Morphology: A Connectionist Model
|
cmp-lg cs.CL
|
This paper describes a modular connectionist model of the acquisition of
receptive inflectional morphology. The model takes inputs in the form of phones
one at a time and outputs the associated roots and inflections. Simulations
using artificial language stimuli demonstrate the capacity of the model to
learn suffixation, prefixation, infixation, circumfixation, mutation, template,
and deletion rules. Separate network modules responsible for syllables enable
the network to learn simple reduplication rules as well. The model also
embodies constraints against association-line crossing.
|
cmp-lg/9405028
|
Semantics of Complex Sentences in Japanese
|
cmp-lg cs.CL
|
The important part of the semantics of a complex sentence is captured as
relations among the semantic roles of the subordinate and main clauses.
However, if relations may hold between every pair of semantic roles, the amount
of computation needed to identify the relations that hold in a given sentence
is extremely large. In this paper, for the semantics of Japanese complex
sentences, we introduce new pragmatic roles, called `observer' and `motivated',
to bridge the semantic roles of the subordinate clause and those of the main
clause. With these new roles, constraints on the relations among
semantic/pragmatic roles become almost local within the subordinate or main
clause. In other words, for the semantics of the whole complex sentence, the
only role we need to deal with is `motivated'.
|
cmp-lg/9405029
|
Structural Tags, Annealing and Automatic Word Classification
|
cmp-lg cs.CL
|
This paper describes an automatic word classification system which uses a
locally optimal annealing algorithm and average class mutual information. A new
word-class representation, the structural tag, is introduced and its advantages
for use in statistical language modelling are presented. A summary of some
results with the one million word LOB corpus is given; the algorithm is also
shown to discover the vowel-consonant distinction and displays an ability to
cluster words syntactically in a Latin corpus. Finally, a comparison is made
between the current classification system and several leading alternative
systems, which shows that the current system performs respectably well.
|
cmp-lg/9405030
|
Priority Union and Generalization in Discourse Grammars
|
cmp-lg cs.CL
|
We describe an implementation in Carpenter's typed feature formalism, ALE, of
a discourse grammar of the kind proposed by Scha, Polanyi, et al. We examine
their method for resolving parallelism-dependent anaphora and show that there
is a coherent feature-structural rendition of this type of grammar which uses
the operations of priority union and generalization. We describe an
augmentation of the ALE system to encompass these operations and we show that
an appropriate choice of definition for priority union gives the desired
multiple output for examples of VP-ellipsis which exhibit a strict/sloppy
ambiguity.
|
cmp-lg/9405031
|
An Attributive Logic of Set Descriptions and Set Operations
|
cmp-lg cs.CL
|
This paper provides a model theoretic semantics to feature terms augmented
with set descriptions. We provide constraints to specify HPSG style set
descriptions, fixed cardinality set descriptions, set-membership constraints,
restricted universal role quantifications, set union, intersection, subset and
disjointness. A sound, complete and terminating consistency checking procedure
is provided to determine the consistency of any given term in the logic. It is
shown that determining the consistency of terms is an NP-complete problem.
|
cmp-lg/9405032
|
Modularity in a Connectionist Model of Morphology Acquisition
|
cmp-lg cs.CL
|
This paper describes a modular connectionist model of the acquisition of
receptive inflectional morphology. The model takes inputs in the form of phones
one at a time and outputs the associated roots and inflections. In its simplest
version, the network consists of separate simple recurrent subnetworks for root
and inflection identification; both networks take the phone sequence as inputs.
It is shown that the performance of the two separate modular networks is
superior to a single network responsible for both root and inflection
identification. In a more elaborate version of the model, the network learns to
use separate hidden-layer modules to solve the separate tasks of root and
inflection identification.
|
cmp-lg/9405033
|
Relating Complexity to Practical Performance in Parsing with
Wide-Coverage Unification Grammars
|
cmp-lg cs.CL
|
The paper demonstrates that exponential complexities with respect to grammar
size and input length have little impact on the performance of three
unification-based parsing algorithms, using a wide-coverage grammar. The
results imply that the study and optimisation of unification-based parsing must
rely on empirical data until complexity theory can more accurately predict the
practical behaviour of such parsers.
|
cmp-lg/9405034
|
Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and
Its Automatic Evaluation
|
cmp-lg cs.CL
|
Acquiring noun phrases from running texts is useful for many applications,
such as word grouping and terminology indexing. The reported literature adopts
either a pure probabilistic approach or a pure rule-based noun-phrase grammar
to tackle this problem. In this paper, we apply a probabilistic chunker to
decide the implicit boundaries of constituents and utilize linguistic knowledge
to extract the noun phrases with a finite-state mechanism. The test texts are
the SUSANNE Corpus, and the results are evaluated automatically by comparison
with the parse field of the SUSANNE Corpus. The results of this preliminary
experiment are encouraging.
|
cmp-lg/9405035
|
Dual-Coding Theory and Connectionist Lexical Selection
|
cmp-lg cs.CL
|
We introduce the bilingual dual-coding theory as a model for bilingual mental
representation. Based on this model, lexical selection neural networks are
implemented for a connectionist transfer project in machine translation. This
lexical selection approach has two advantages. First, it is learnable: little
human effort in knowledge engineering is required. Second, it is
psycholinguistically well-founded.
|
cmp-lg/9406001
|
Intentions and Information in Discourse
|
cmp-lg cs.CL
|
This paper is about the flow of inference between communicative intentions,
discourse structure and the domain during discourse processing. We augment a
theory of discourse interpretation with a theory of distinct mental attitudes
and reasoning about them, in order to provide an account of how the attitudes
interact with reasoning about discourse structure.
|
cmp-lg/9406002
|
Speech Dialogue with Facial Displays: Multimodal Human-Computer
Conversation
|
cmp-lg cs.CL
|
Human face-to-face conversation is an ideal model for human-computer
dialogue. One of the major features of face-to-face communication is its
multiplicity of communication channels that act on multiple modalities. To
realize a natural multimodal dialogue, it is necessary to study how humans
perceive information and determine the information to which humans are
sensitive. A face is an independent communication channel that conveys
emotional and conversational signals, encoded as facial expressions. We have
developed an experimental system that integrates speech dialogue and facial
animation, to investigate the effect of introducing communicative facial
expressions as a new modality in human-computer conversation. Our experiments
have shown that facial expressions are helpful, especially upon first contact
with the system. We have also discovered that featuring facial expressions at
an early stage improves subsequent interaction.
|
cmp-lg/9406003
|
A Learning Approach to Natural Language Understanding
|
cmp-lg cs.CL
|
In this paper we propose a learning paradigm for the problem of understanding
spoken language. The basis of the work is in a formalization of the
understanding problem as a communication problem. This results in the
definition of a stochastic model of the production of speech or text starting
from the meaning of a sentence. The resulting understanding algorithm consists
of a Viterbi maximization procedure, analogous to that commonly used for
recognizing speech. The algorithm was implemented for building
|
cmp-lg/9406004
|
Towards a Principled Representation of Discourse Plans
|
cmp-lg cs.CL
|
We argue that discourse plans must capture the intended causal and
decompositional relations between communicative actions. We present a planning
algorithm, DPOCL, that builds plan structures that properly capture these
relations, and show how these structures are used to solve the problems that
plagued previous discourse planners, and allow a system to participate
effectively and flexibly in an ongoing dialogue.
|
cmp-lg/9406005
|
Word-Sense Disambiguation Using Decomposable Models
|
cmp-lg cs.CL
|
Most probabilistic classifiers used for word-sense disambiguation have either
been based on only one contextual feature or have used a model that is simply
assumed to characterize the interdependencies among multiple contextual
features. In this paper, a different approach to formulating a probabilistic
model is presented along with a case study of the performance of models
produced in this manner for the disambiguation of the noun "interest". We
describe a method for formulating probabilistic models that use multiple
contextual features for word-sense disambiguation, without requiring untested
assumptions regarding the form of the model. Using this approach, the joint
distribution of all variables is described by only the most systematic variable
interactions, thereby limiting the number of parameters to be estimated,
supporting computational efficiency, and providing an understanding of the
data.
|
cmp-lg/9406006
|
Detecting and Correcting Speech Repairs
|
cmp-lg cs.CL
|
Interactive spoken dialog provides many new challenges for spoken language
systems. One of the most critical is the prevalence of speech repairs. This
paper presents an algorithm that detects and corrects speech repairs based on
finding the repair pattern. The repair pattern is built by finding word matches
and word replacements, and identifying fragments and editing terms. Rather than
using a set of prebuilt templates, we build the pattern on the fly. In a fair
test, our method, when combined with a statistical model to filter possible
repairs, was successful at detecting and correcting 80\% of the repairs,
without using prosodic information or a parser.
|
cmp-lg/9406007
|
Aligning a Parallel English-Chinese Corpus Statistically with Lexical
Criteria
|
cmp-lg cs.CL
|
We describe our experience with automatic alignment of sentences in parallel
English-Chinese texts. Our report concerns three related topics:
(1) progress on the HKUST English-Chinese Parallel Bilingual Corpus;
(2) experiments addressing the applicability of Gale & Church's length-based
statistical method to the task of alignment involving a non-Indo-European
language; and
(3) an improved statistical method that also incorporates domain-specific
lexical cues.
|
cmp-lg/9406008
|
Parsing Turkish with the Lexical Functional Grammar Formalism
|
cmp-lg cs.CL
|
This paper describes our work on parsing Turkish using the lexical-functional
grammar formalism. This work represents the first significant effort for
parsing Turkish. Our implementation is based on Tomita's parser developed at
Carnegie-Mellon University Center for Machine Translation. The grammar covers a
substantial subset of Turkish including simple and complex sentences, and deals
with a reasonable amount of word order freeness. The complex agglutinative
morphology of Turkish lexical structures is handled using a separate two-level
morphological analyzer. After a discussion of key relevant issues regarding
Turkish grammar, we discuss aspects of our system and present results from our
implementation. Our initial results suggest that our system can parse about
82\% of the sentences directly and almost all of the remainder with very minor
pre-editing.
|
cmp-lg/9406009
|
Multiset-Valued Linear Index Grammars: Imposing Dominance Constraints on
Derivations
|
cmp-lg cs.CL
|
This paper defines multiset-valued linear index grammar and unordered vector
grammar with dominance links. The former models certain uses of multiset-valued
feature structures in unification-based formalisms, while the latter is
motivated by word order variation and by ``quasi-trees'', a generalization of
trees. The two formalisms are weakly equivalent, and an important subset is at
most context-sensitive and polynomially parsable.
|
cmp-lg/9406010
|
Some Advances in Transformation-Based Part of Speech Tagging
|
cmp-lg cs.CL
|
Most recent research in trainable part of speech taggers has explored
stochastic tagging. While these taggers obtain high accuracy, linguistic
information is captured indirectly, typically in tens of thousands of lexical
and contextual probabilities. In [Brill92], a trainable rule-based tagger was
described that obtained performance comparable to that of stochastic taggers,
but captured relevant linguistic information in a small number of simple
non-stochastic rules. In this paper, we describe a number of extensions to this
rule-based tagger. First, we describe a method for expressing lexical relations
in tagging that are not captured by stochastic taggers. Next, we show a
rule-based approach to tagging unknown words. Finally, we show how the tagger
can be extended into a k-best tagger, where multiple tags can be assigned to
words in some cases of uncertainty.
|
cmp-lg/9406011
|
Exploring the Statistical Derivation of Transformational Rule Sequences
for Part-of-Speech Tagging
|
cmp-lg cs.CL
|
Eric Brill has recently proposed a simple and powerful corpus-based language
modeling approach that can be applied to various tasks including part-of-speech
tagging and building phrase structure trees. The method learns a series of
symbolic transformational rules, which can then be applied in sequence to a
test corpus to produce predictions. The learning process only requires counting
matches for a given set of rule templates, allowing the method to survey a very
large space of possible contextual factors. This paper analyses Brill's
approach as an interesting variation on existing decision tree methods, based
on experiments involving part-of-speech tagging for both English and ancient
Greek corpora. In particular, the analysis throws light on why the new
mechanism seems surprisingly resistant to overtraining. A fast, incremental
implementation and a mechanism for recording the dependencies that underlie the
resulting rule sequence are also described.
|
cmp-lg/9406012
|
Self-Organizing Machine Translation: Example-Driven Induction of
Transfer Functions
|
cmp-lg cs.CL
|
With the advent of faster computers, the notion of doing machine translation
from a huge stored database of translation examples is no longer unreasonable.
This paper describes an attempt to merge the Example-Based Machine Translation
(EBMT) approach with psycholinguistic principles. A new formalism for
context-free grammars, called *marker-normal form*, is demonstrated and used to
describe language data in a way compatible with psycholinguistic theories. By
embedding this formalism in a standard multivariate optimization framework, a
system can be built that infers correct transfer functions for a set of
bilingual sentence pairs and then uses those functions to translate novel
sentences. The validity of this line of reasoning has been tested in the
development of a system called METLA-1. This system has been used to infer
English->French and English->Urdu transfer functions from small corpora. The
results of those experiments are examined, both in engineering terms as well as
in more linguistic terms. In general, the results of these experiments were
psychologically and linguistically well-grounded while still achieving a
respectable level of success when compared against a similar prototype using
Hidden Markov Models.
|
cmp-lg/9406013
|
Graded Unification: A Framework for Interactive Processing
|
cmp-lg cs.CL
|
An extension to classical unification, called {\em graded unification} is
presented. It is capable of combining contradictory information. An interactive
processing paradigm and parser based on this new operator are also presented.
|
cmp-lg/9406014
|
A Hybrid Reasoning Model for Indirect Answers
|
cmp-lg cs.CL
|
This paper presents our implemented computational model for interpreting and
generating indirect answers to Yes-No questions. Its main features are 1) a
discourse-plan-based approach to implicature, 2) a reversible architecture for
generation and interpretation, 3) a hybrid reasoning model that employs both
plan inference and logical inference, and 4) use of stimulus conditions to
model a speaker's motivation for providing appropriate, unrequested
information. The model handles a wider range of types of indirect answers than
previous computational models and has several significant advantages.
|
cmp-lg/9406015
|
Statistical Augmentation of a Chinese Machine-Readable Dictionary
|
cmp-lg cs.CL
|
We describe a method of using statistically-collected Chinese character
groups from a corpus to augment a Chinese dictionary. The method is
particularly useful for extracting domain-specific and regional words not
readily available in machine-readable dictionaries. Output was evaluated both
using human evaluators and against a previously available dictionary. We also
evaluated performance improvement in automatic Chinese tokenization. Results
show that our method outputs legitimate words, acronymic constructions, idioms,
names and titles, as well as technical compounds, many of which were lacking
from the original dictionary.
|
cmp-lg/9406016
|
Corpus-Driven Knowledge Acquisition for Discourse Analysis
|
cmp-lg cs.CL
|
The availability of large on-line text corpora provides a natural and
promising bridge between the worlds of natural language processing (NLP) and
machine learning (ML). In recent years, the NLP community has been aggressively
investigating statistical techniques to drive part-of-speech taggers, but
application-specific text corpora can be used to drive knowledge acquisition at
much higher levels as well. In this paper we will show how ML techniques can be
used to support knowledge acquisition for information extraction systems. It is
often very difficult to specify an explicit domain model for many information
extraction applications, and it is always labor intensive to implement
hand-coded heuristics for each new domain. We have discovered that it is
nevertheless possible to use ML algorithms in order to capture knowledge that
is only implicitly present in a representative text corpus. Our work addresses
issues traditionally associated with discourse analysis and intersentential
inference generation, and demonstrates the utility of ML algorithms at this
higher level of language analysis. The benefits of our work address the
portability and scalability of information extraction (IE) technologies. When
hand-coded heuristics are used to manage discourse analysis in an information
extraction system, months of programming effort are easily needed to port a
successful IE system to a new domain. We will show how ML algorithms can reduce
this
|
cmp-lg/9406017
|
An Automatic Method of Finding Topic Boundaries
|
cmp-lg cs.CL
|
This article outlines a new method of locating discourse boundaries based on
lexical cohesion and a graphical technique called dotplotting. The application
of dotplotting to discourse segmentation can be performed either manually, by
examining a graph, or automatically, using an optimization algorithm. The
results of two experiments involving automatically locating boundaries between
a series of concatenated documents are presented. Areas of application and
future directions for this work are also outlined.
|
cmp-lg/9406018
|
TDL--- A Type Description Language for Constraint-Based Grammars
|
cmp-lg cs.CL
|
This paper presents \tdl, a typed feature-based representation language and
inference system. Type definitions in \tdl\ consist of type and feature
constraints over the boolean connectives. \tdl\ supports open- and closed-world
reasoning over types and allows for partitions and incompatible types. Working
with partially as well as with fully expanded types is possible. Efficient
reasoning in \tdl\ is accomplished through specialized modules.
|
cmp-lg/9406019
|
A Complete and Recursive Feature Theory
|
cmp-lg cs.CL
|
Various feature descriptions are being employed in logic programming
languages and constraint-based grammar formalisms. The common notational
primitives of these descriptions are functional attributes called features. The
descriptions considered in this paper are the possibly quantified first-order
formulae obtained from a signature of binary and unary predicates called
features and sorts, respectively. We establish a first-order theory FT by means
of three axiom schemes, show its completeness, and construct three elementarily
equivalent models. One of the models consists of so-called feature graphs, a
data structure common in computational linguistics. The other two models
consist of so-called feature trees, a record-like data structure generalizing
the trees corresponding to first-order terms. Our completeness proof exhibits a
terminating simplification system deciding validity and satisfiability of
possibly quantified feature descriptions.
|
cmp-lg/9406020
|
DPOCL: A Principled Approach to Discourse Planning
|
cmp-lg cs.CL
|
Research in discourse processing has identified two representational
requirements for discourse planning systems. First, discourse plans must
adequately represent the intentional structure of the utterances they produce
in order to enable a computational discourse agent to respond effectively to
communicative failures \cite{MooreParisCL}. Second, discourse plans must
represent the informational structure of utterances. In addition to these
representational requirements, we argue that discourse planners should be
formally characterizable in terms of soundness and completeness.
|