---
title: Credo AI
emoji: 🚀
colorFrom: red
colorTo: red
sdk: docker
app_port: 8501
tags:
  - streamlit
pinned: false
short_description: The Two-Brain Misinformation Detector
license: mit
---

# 🧠 Credo AI: The Two-Brain Misinformation Detector

Welcome to the live demo for our Hack2Skill Hackathon project!

This application is a proof-of-concept powered by a unique "Two-Brain" AI system designed to provide both rapid and in-depth analysis of news articles and other text. It demonstrates a powerful new architecture for tackling misinformation.

## Key Features

- **Live URL & Text Analysis:** Paste any text, or the URL of a news article, to get an instant analysis.

- **Dual-AI Verdict:** Get a fast, high-confidence FAKE/REAL verdict from our specialist model, alongside a nuanced 6-point breakdown from our expert model.

- **Gemini-Powered Summaries:** Receive a clear, conversational summary of the findings, powered by Google's Gemini API.

- **Analysis History:** All of your queries are saved in your session and can be reviewed on the "Analysis History" page.
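One way the two verdicts could be merged is sketched below. This is an illustrative sketch, not the actual Credo AI implementation: the names `SpecialistVerdict` and `combine_verdicts`, the six-criterion breakdown keys, and the 0.9 confidence threshold are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class SpecialistVerdict:
    """Output of the fast specialist model (hypothetical structure)."""
    label: str         # "FAKE" or "REAL"
    confidence: float  # 0.0 - 1.0


def combine_verdicts(fast: SpecialistVerdict, breakdown: dict) -> dict:
    """Merge the fast binary verdict with the expert 6-point breakdown.

    `breakdown` maps each of six criteria (e.g. "sensationalism",
    "source_credibility", ...) to a 0-1 risk score; higher is more
    suspicious.
    """
    risk = sum(breakdown.values()) / len(breakdown)
    # Trust the fast model when it is confident; otherwise lean on the
    # averaged expert breakdown.
    if fast.confidence >= 0.9:
        final = fast.label
    else:
        final = "FAKE" if risk >= 0.5 else "REAL"
    return {
        "verdict": final,
        "fast_confidence": fast.confidence,
        "expert_risk": round(risk, 3),
    }
```

The design choice here is deliberately conservative: the fast model only decides alone when it is highly confident, so borderline cases fall through to the more nuanced breakdown.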

## ⚠️ Important Limitations & Prototype Nature

This is a hackathon prototype, and while powerful, it has important limitations that highlight key challenges in AI fact-checking.

### 1. Pattern Recognition vs. World Knowledge

Our AI models are expert pattern detectors, not true fact-checkers backed by a database of world knowledge. They have been trained to identify the linguistic style, tone, and structure of fake news with high accuracy.

However, this means the system can be fooled by simple, declarative statements that are factually incorrect but do not fit the stylistic patterns of fake news.

For example, the model may incorrectly classify "The sun rises in the West" as REAL.

This happens because the statement, while false, is simple and declarative. It doesn't contain the sensationalism, complex grammar, or emotional language that the AI has learned to associate with fake news. This is a classic "common sense" problem in AI and demonstrates that our model is a sophisticated misinformation detector, not a universal truth engine.
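The distinction can be made concrete with a toy, purely stylistic scorer. This is not the production model; it is a deliberately crude sketch that only counts surface cues (all-caps words, exclamation marks, a small hand-picked list of sensational words), so a calm but false statement scores as unremarkable, exactly as described above.

```python
import re

# Hand-picked sensational vocabulary for the toy example (illustrative only).
SENSATIONAL_WORDS = {"shocking", "unbelievable", "exposed", "miracle", "secret"}


def fake_style_score(text: str) -> float:
    """Score 0-1 based only on surface style cues, never on facts."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    caps = sum(1 for w in words if len(w) > 2 and w.isupper())
    sensational = sum(1 for w in words if w.lower() in SENSATIONAL_WORDS)
    exclamations = text.count("!")
    # Crude ratio of style cues to length, capped at 1.0.
    return min(1.0, (caps + sensational + exclamations) / len(words))
```

On this scorer, "The sun rises in the West." registers no fake-news style cues at all, while a hyped-up sentence full of capitals and exclamation marks scores high, regardless of which one is actually true.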

### 2. Limited Training Data & Domain Bias

As a prototype built within a limited timeframe, the models were trained on a relatively small amount of publicly available data, primarily Western political news. As a result, the model's accuracy will be significantly lower on:

- Topics outside of politics (e.g., science, finance, health).

- News from different geopolitical contexts (e.g., Indian or other Asian news).

### 3. Future Work

These limitations highlight the path forward. The next version of Credo AI would involve:

- **Integrating a Knowledge Graph:** Connecting the AI to a database of real-world facts to overcome the common-sense problem.

- **Training on a Larger, Multi-Domain Dataset:** Expanding the training data to include millions of articles from diverse topics and regions to create a truly global tool.
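To illustrate what the knowledge-graph idea would buy us, here is a minimal sketch under stated assumptions: facts are stored as (subject, relation, object) triples, and a claim that conflicts with a stored triple is flagged as contradicted. The data and function names are hypothetical; a real system would query a large external graph rather than a hard-coded set.

```python
# Tiny hard-coded "knowledge graph" of (subject, relation, object) triples
# (hypothetical data for illustration only).
KNOWN_FACTS = {
    ("sun", "rises_in", "east"),
    ("water", "boils_at_celsius", "100"),
}


def check_claim(subject: str, relation: str, obj: str) -> str:
    """Return SUPPORTED, CONTRADICTED, or UNKNOWN for one factual triple."""
    if (subject, relation, obj) in KNOWN_FACTS:
        return "SUPPORTED"
    # The same subject and relation with a different object contradicts
    # the claim.
    if any(s == subject and r == relation for s, r, _ in KNOWN_FACTS):
        return "CONTRADICTED"
    return "UNKNOWN"
```

Under this scheme, "The sun rises in the West" would be caught as CONTRADICTED by the fact lookup even though its style looks harmless, which is exactly the gap the stylistic models leave open.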

*Built by the Data Dragons 🐉 for the Hack2Skill Hackathon 2025.*