---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---

# SentenceTransformer

Repository with the model used in the implementation of the WikiCheck API, an end-to-end open-source automatic fact-checking system based on Wikipedia. The research was published in the **CIKM 2021** applied track:

- *Trokhymovych, Mykola, and Diego Saez-Trumper.* **WikiCheck: An End-to-End Open Source Automatic Fact-Checking API Based on Wikipedia.** Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Association for Computing Machinery, 2021, pp. 4155–4164, CIKM '21. [![DOI:10.1145/3459637.3481961](https://zenodo.org/badge/DOI/10.1145/3459637.3481961.svg)](https://dl.acm.org/doi/10.1145/3459637.3481961)
- The preprint **WikiCheck: An End-to-End Open Source Automatic Fact-Checking API Based on Wikipedia**: [![DOI:10.48550/arXiv.2109.00835](https://zenodo.org/badge/DOI/10.48550/arXiv.2109.00835.svg)](https://doi.org/10.48550/arXiv.2109.00835)

The model was uploaded from the following [repo](https://github.com/trokhymovych/WikiCheck).

Cite:

```
@inproceedings{10.1145/3459637.3481961,
  author = {Trokhymovych, Mykola and Saez-Trumper, Diego},
  title = {WikiCheck: An End-to-End Open Source Automatic Fact-Checking API Based on Wikipedia},
  year = {2021},
  isbn = {9781450384469},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3459637.3481961},
  doi = {10.1145/3459637.3481961},
  booktitle = {Proceedings of the 30th ACM International Conference on Information & Knowledge Management},
  pages = {4155–4164},
  numpages = {10},
  keywords = {applied research, nlp, nli, wikipedia, fact-checking},
  location = {Virtual Event, Queensland, Australia},
  series = {CIKM '21}
}
```

This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BartModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
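Optionally, you can confirm which library versions are installed and compare them with those listed under *Framework Versions* below. This is only a minimal, optional sanity check added for convenience:

```python
# Optional sanity check: print the installed library versions
# (compare with the versions listed under "Framework Versions" below).
import sentence_transformers
import transformers
import torch

print("sentence-transformers:", sentence_transformers.__version__)
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
```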
Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("arg-tech/bart_tuned_wikifact_check_ucu_trokhymovych")

# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Training Details

### Framework Versions

- Python: 3.9.6
- Sentence Transformers: 3.4.1
- Transformers: 4.44.0
- PyTorch: 2.4.0
- Accelerate: 0.33.0
- Datasets:
- Tokenizers: 0.19.1

## Citation

### BibTeX

See the BibTeX entry under *Cite:* above.
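Semantic search is listed above among the model's use cases. The sketch below shows one way to rank candidate evidence sentences against a claim by cosine similarity with `sentence_transformers.util.semantic_search`; the claim and candidate sentences are invented for illustration and are not part of the WikiCheck pipeline itself:

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical claim and candidate evidence sentences, purely for illustration
claim = "The Eiffel Tower is located in Paris."
candidates = [
    "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
    "The Statue of Liberty was a gift from the people of France.",
    "Paris is the capital and most populous city of France.",
]

model = SentenceTransformer("arg-tech/bart_tuned_wikifact_check_ucu_trokhymovych")

# Encode the claim and the candidates into the shared 768-dimensional space
claim_embedding = model.encode(claim, convert_to_tensor=True)
candidate_embeddings = model.encode(candidates, convert_to_tensor=True)

# Rank candidates by cosine similarity to the claim
hits = util.semantic_search(claim_embedding, candidate_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.3f}\t{candidates[hit['corpus_id']]}")
```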