---
language: en
tags:
- text-classification
- requirements-engineering
- bert
datasets:
- promise-nfr
metrics:
- accuracy
- f1
model-index:
- name: RequirementClassifier
  results:
  - task:
      type: text-classification
      name: Requirement Classification
    dataset:
      name: PROMISE NFR
      type: promise-nfr
    metrics:
    - type: accuracy
      name: Accuracy
      value: 0.0
---

# RequirementClassifier

Version: 27

## Model Description

This model is a BERT model fine-tuned for binary classification of software requirements. Given a piece of text, it predicts either "requirement" or "non-requirement".

## Intended Uses

- Classify software requirement documents
- Identify requirement vs. non-requirement statements
- Automated requirement extraction from documents

## Training Data

The model was trained on the PROMISE NFR dataset, augmented with additional non-requirement examples.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("rajinikarcg/RequirementClassifier")
model = AutoModelForSequenceClassification.from_pretrained("rajinikarcg/RequirementClassifier")
model.eval()  # disable dropout for inference

# Prepare input
text = "The system shall respond within 2 seconds"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

# Get prediction
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    prediction = torch.argmax(logits, dim=-1).item()

# Map to label
labels = ["non-requirement", "requirement"]
print(f"Prediction: {labels[prediction]}")
```

## Version History

- 27: Latest version

## Citation

If you use this model, please cite the PROMISE NFR dataset.
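
The Usage example above classifies one sentence at a time. For document-level extraction, one of the intended uses listed earlier, candidate sentences can be scored in a single batch. The sketch below is a minimal illustration only: it reuses the same checkpoint and label order as the Usage section, and the example sentences and confidence printout are illustrative assumptions, not part of the released model card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Same checkpoint as in the Usage section
tokenizer = AutoTokenizer.from_pretrained("rajinikarcg/RequirementClassifier")
model = AutoModelForSequenceClassification.from_pretrained("rajinikarcg/RequirementClassifier")
model.eval()

labels = ["non-requirement", "requirement"]

# Hypothetical candidate sentences, e.g. extracted from a document
sentences = [
    "The system shall respond within 2 seconds",
    "This chapter gives an overview of the project history",
    "Users must be able to reset their password via email",
]

# Tokenize the whole batch; padding keeps the tensors rectangular
inputs = tokenizer(sentences, return_tensors="pt", padding=True,
                   truncation=True, max_length=128)

with torch.no_grad():
    logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)

# Print the predicted label and its probability for each sentence
for sentence, prob in zip(sentences, probs):
    idx = int(prob.argmax())
    print(f"{labels[idx]:>15}  ({prob[idx]:.2f})  {sentence}")
```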