NL-TO-FOL
This model is a fully fine-tuned version of google-t5/t5-base, which is released under the Apache 2.0 License; the fine-tuned model is itself released under the Apache 2.0 License. It was trained to translate natural language (NL) statements into first-order logic (FOL) expressions. Use cases include further fine-tuning or adaptation for domain-specific formalization tasks (e.g., legal, biomedical) and interactive systems requiring formal reasoning.

Users should verify and validate the symbolic formulas generated by the model, since the required degree of correctness depends on the application.
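Because generated formulas should be validated before downstream use, a lightweight structural sanity check can catch obviously malformed outputs. The helper below is only an illustrative sketch (the function name and checks are not part of this model's tooling): it verifies balanced parentheses and non-emptiness, which does not guarantee valid FOL syntax.

```python
def looks_well_formed(formula: str) -> bool:
    """Cheap structural sanity check for a generated FOL string.

    Illustrative only: checks balanced parentheses and non-emptiness;
    it does NOT guarantee the string is syntactically valid FOL.
    """
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # closing paren before any matching open
                return False
    return depth == 0 and bool(formula.strip())

# Example: a typical output shape for "All dogs are animals."
print(looks_well_formed("∀x (Dog(x) → Animal(x))"))  # True
print(looks_well_formed("∀x (Dog(x) → Animal(x)"))   # False (unbalanced)
```

For stricter validation, a proper FOL parser should be used instead of this heuristic.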
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load tokenizer and model
model_path = "fvossel/t5-base-nl-to-fol"
tokenizer = T5Tokenizer.from_pretrained(model_path)
model = T5ForConditionalGeneration.from_pretrained(model_path).to("cuda")

# Example NL input
nl_input = "All dogs are animals."

# Preprocess prompt
input_text = "translate English natural language statements into first-order logic (FOL): " + nl_input
inputs = tokenizer(input_text, return_tensors="pt", padding=True).to("cuda")

# Generate prediction
with torch.no_grad():
    outputs = model.generate(
        inputs["input_ids"],
        max_length=256,
        min_length=1,
        num_beams=5,
        length_penalty=2.0,
        early_stopping=True,
    )

# Decode and print result
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The model was fine-tuned on the groves dataset.
The model was fully fine-tuned (no LoRA) from google-t5/t5-base. During preprocessing, FOL symbols (e.g., ∀) were replaced with textual tokens like FORALL.

Base model: google-t5/t5-base
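Since training replaced FOL symbols such as ∀ with textual tokens like FORALL, raw model output may contain those tokens. A small post-processing step can map them back to symbols. The sketch below is an assumption-laden example: only FORALL is documented above, so the remaining token names should be checked against the model's actual outputs.

```python
import re

# Hypothetical token-to-symbol mapping: FORALL comes from the model card;
# the other token names are assumptions to verify against real outputs.
TOKEN_TO_SYMBOL = {
    "FORALL": "∀",
    "EXISTS": "∃",
    "AND": "∧",
    "OR": "∨",
    "IMPLIES": "→",
    "NOT": "¬",
}

# Word boundaries prevent partial matches (e.g., the "OR" inside "FORALL").
_TOKEN_RE = re.compile(r"\b(" + "|".join(TOKEN_TO_SYMBOL) + r")\b")

def restore_symbols(text: str) -> str:
    """Replace textual FOL tokens in model output with logical symbols."""
    return _TOKEN_RE.sub(lambda m: TOKEN_TO_SYMBOL[m.group(1)], text)

print(restore_symbols("FORALL x ( Dog ( x ) IMPLIES Animal ( x ) )"))
# ∀ x ( Dog ( x ) → Animal ( x ) )
```

Using a single compiled regex (rather than chained `str.replace` calls) makes the substitution order-independent and avoids accidental replacements inside longer tokens.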