Tags: Text Classification · Transformers · Safetensors · English · bert · creative writing · original ip · text-embeddings-inference
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("niltheory/ExistenceTypesAnalysis")
model = AutoModelForSequenceClassification.from_pretrained("niltheory/ExistenceTypesAnalysis")
```
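With the tokenizer and model loaded, inference is a forward pass followed by a softmax over the logits; the class index maps to a label name via `model.config.id2label`. The post-processing step can be sketched in pure Python (the logits below are illustrative values, not real model output):

```python
import math

def softmax(logits):
    """Convert raw classification logits into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for a 3-class head (not real model output)
logits = [2.1, -0.3, 0.8]
probs = softmax(logits)
predicted = max(range(len(probs)), key=probs.__getitem__)
# In practice: label = model.config.id2label[predicted]
print(predicted)
```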
# Existence Analysis Model (EAM)

**Created for:** Compendium Terminum, IP
**Base Model:** `bert-large-cased-whole-word-masking`
## Iterative Development

### Iteration #1
- **Initial Model:** Used `distilbert-base-uncased` for foundational training.
- **Dataset Size:** 96 entries.
- **Outcome:** Established a baseline for accuracy metrics.
### Iteration #2
- **Model Upgrade:** Transitioned from `distilbert-base-uncased` to `bert-base-uncased`.
- **Dataset Expansion:** Increased from 96 to 296 entries.
- **Performance:** Improved accuracy scores; identified edge cases for refinement.
### Iteration #3
- **Model Upgrade:** Transitioned from `bert-base-uncased` to `bert-large-cased-whole-word-masking`.
- **Advancements:** Enhanced contextual sensitivity and accuracy.
- **Results:** Demonstrated a more nuanced understanding and greater sensitivity in predictions.
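The accuracy metric tracked across these iterations is a straightforward agreement ratio between predictions and gold labels. A minimal sketch (the label strings below are placeholders, not the model's actual classes):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    if not labels:
        raise ValueError("empty label list")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Placeholder predictions and gold labels for illustration
preds = ["abstract", "concrete", "abstract", "concrete"]
gold  = ["abstract", "concrete", "concrete", "concrete"]
print(accuracy(preds, gold))  # → 0.75
```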
## Observations
- Each iteration has contributed to the model's evolving sophistication, leading to improved interpretive performance and accuracy.
- Continuous evaluation, especially in complex or ambiguous cases, is pivotal for future enhancements.
## License
This dataset is licensed under CC BY-NC-SA 4.0. Users are free to use, modify, and share it under the same terms, but commercial use is prohibited.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="niltheory/ExistenceTypesAnalysis")
```
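A `text-classification` pipeline returns a list of dicts with `label` and `score` keys. Selecting the top prediction from such a result can be sketched as follows (the `result` below is a mocked example with placeholder labels, not actual output from this model):

```python
# Mocked example of a text-classification pipeline result;
# the real list comes from a call like: result = pipe("some text")
result = [
    {"label": "LABEL_0", "score": 0.91},
    {"label": "LABEL_1", "score": 0.09},
]

# Pick the entry with the highest score
top = max(result, key=lambda r: r["score"])
print(top["label"])  # → LABEL_0
```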