AutoBool
This model is part of the AutoBool framework, a reinforcement learning approach for training large language models to generate high-quality Boolean queries for systematic literature reviews.
This variant uses the conceptual method for structured query construction. The model follows a systematic 5-step process to identify concepts and build Boolean queries based on domain logic.
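For illustration, the block-building logic of Steps 3 and 5 of the conceptual method can be sketched in a few lines of Python. The concept names and terms below are hypothetical examples chosen for the cirrhosis topic, not output from the model:

```python
# Hypothetical example terms (not produced by the model): one term list
# per concept, each term already carrying its PubMed field tag.
concepts = {
    "disease": ["alcoholic liver disease[tiab]", "Liver Diseases, Alcoholic[mh]"],
    "test": ["ultrasonograph*[tiab]", "Ultrasonography[mh]"],
}

# Step 3: OR the terms within each concept into a parenthesised block.
blocks = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]

# Step 5: AND the blocks together into the final query.
query = " AND ".join(blocks)
print(query)
# → (alcoholic liver disease[tiab] OR Liver Diseases, Alcoholic[mh]) AND (ultrasonograph*[tiab] OR Ultrasonography[mh])
```

The model performs this assembly itself inside its reasoning; the snippet only makes the structure of the final query explicit.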
The model emits its reasoning and final query in the following format:

<think>[Step-by-step conceptual analysis]</think><answer>[Boolean query]</answer>

The model was trained using:
This model is designed for:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import re

model_name = "ielabgroup/Autobool-Qwen4b-Reasoning-conceptual"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Define your systematic review topic
topic = "Ultrasonography for diagnosis of alcoholic cirrhosis in people with alcoholic liver disease"

# Construct the prompt with system and user messages
messages = [
    {"role": "system", "content": (
        "You are an expert systematic review information specialist. "
        "Formulate a systematic review Boolean query using step-by-step reasoning "
        "inside <think> </think>, and output the final query inside <answer> </answer>."
    )},
    {"role": "user", "content": f'''You are given a systematic review topic titled: "{topic}".

Construct a Boolean query using the **conceptual method**, based on domain logic and structured thinking.

**Step 1**: Identify 2–3 key concepts from the topic (e.g., Population, Intervention, Outcome).

**Step 2**: For each concept:
- List related terms: synonyms, variants, relevant MeSH terms.
- Prioritise specific, high-precision terms.

**Step 3**: Create a Boolean block per concept:
- Combine terms using OR.
- Use free-text terms and MeSH terms (e.g., chronic pain[tiab], Pain[mh]).
- **Do not wrap terms or phrases in double quotes**, as this disables automatic term mapping (ATM).
- Tag terms individually when needed (e.g., covid-19[ti] vaccine[ti] children[ti]).
- Field tags limit search scope and disable ATM.

**Step 4**: Use wildcards (*) to capture word variants (e.g., vaccin* → vaccine, vaccination):
- Terms must have ≥4 characters before the * (e.g., colo*).
- Wildcards work with field tags (e.g., breastfeed*[tiab]).

**Step 5**: Combine all Boolean blocks using AND:
((Concept1_term1[tiab] OR Concept1_term2[tiab] OR Concept1_termX[mh]) AND (Concept2_...))

**Only use the following allowed field tags:**
Title: [ti], Abstract: [ab], Title/Abstract: [tiab]
MeSH: [mh], Major MeSH: [majr], Supplementary Concept: [nm]
Text Words: [tw], All Fields: [all]
Publication Type: [pt], Language: [la]

Output your full reasoning inside <think>...</think>
Output only the final Boolean query inside <answer>...</answer>
Do not include any content outside these tags.
Do not include date limits.'''},
]

# Generate the query
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4096)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Extract reasoning and query from the tagged output
reasoning_match = re.search(r'<think>(.*?)</think>', response, re.DOTALL)
query_match = re.search(r'<answer>(.*?)</answer>', response, re.DOTALL)

if reasoning_match and query_match:
    reasoning = reasoning_match.group(1).strip()
    query = query_match.group(1).strip()
    print("Step-by-step conceptual analysis:", reasoning)
    print("\nQuery:", query)
```
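Because the prompt restricts queries to a fixed set of field tags, forbids double quotes, and requires at least four characters before any wildcard, a lightweight post-hoc check can catch malformed outputs before they are sent to PubMed. The validator below is a minimal sketch, not part of the AutoBool framework:

```python
import re

# Allowed field tags, taken from the prompt above.
ALLOWED_TAGS = {"ti", "ab", "tiab", "mh", "majr", "nm", "tw", "all", "pt", "la"}

def validate_query(query: str) -> list[str]:
    """Return a list of rule violations found in a Boolean query string."""
    problems = []
    # Every [tag] must come from the allowed set.
    for tag in re.findall(r'\[([^\]]+)\]', query):
        if tag.lower() not in ALLOWED_TAGS:
            problems.append(f"disallowed field tag: [{tag}]")
    # Wildcard stems need at least 4 characters before the *.
    for stem in re.findall(r'(\w+)\*', query):
        if len(stem) < 4:
            problems.append(f"wildcard stem too short: {stem}*")
    # Double quotes disable automatic term mapping.
    if '"' in query:
        problems.append("query contains double quotes")
    return problems

print(validate_query("(colo*[tiab] OR Pain[mh]) AND flu*[xyz]"))
# → ['disallowed field tag: [xyz]', 'wildcard stem too short: flu*']
```

An empty return value means the query passed all three checks; anything else is worth inspecting before running the search.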
If you use this model, please cite:
```bibtex
@inproceedings{autobool2026,
  title={AutoBool: Reinforcement Learning for Boolean Query Generation in Systematic Reviews},
  author={Shuai Wang and Harrisen Scells and Bevan Koopman and Guido Zuccon},
  booktitle={Proceedings of the 2026 Conference of the European Chapter of the Association for Computational Linguistics (EACL)},
  year={2026}
}
```
License: Apache 2.0