---
title: CEAR – Cultural Exposure & Algorithmic Risk Analyzer
emoji: 📡
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 6.1.0
app_file: app.py
license: mit
---

# CEAR – Cultural Exposure & Algorithmic Risk Analyzer

CEAR is a transparent, rule-based model and Hugging Face Space that helps users understand their social media habits through three interpretable metrics:

- Cultural Connectedness (C-Score) – approximate trend exposure
- Algorithmic Risk (A-Risk) – attention concentration in algorithm-driven feeds
- Diversity Index (D-Index) – distribution of time across platforms

The project combines an analytic scoring engine with a clean Gradio interface. It uses no machine learning; it is a deterministic, rule-based model whose scores are fully interpretable.


## 🚀 Live Demo

Use CEAR directly in your browser:

Hugging Face Space: https://huggingface.co/spaces/<your-username>/CEAR


## 📦 Project Structure

    ├── app.py                  # Gradio interface and interpretation logic
    ├── cear_model.py           # Core scoring engine (C/A/D + efficiency)
    ├── platform_weights.json   # Hand-tuned theoretical platform weights
    ├── requirements.txt        # Dependencies
    └── README.md               # This file

## 🎯 Purpose

CEAR serves two goals:

  1. Provide an interpretable framework for analyzing social media behavior.
  2. Demonstrate a fully documented model card, Gradio deployment, and rule-based transform suitable for academic or instructional settings.

It is ideal for:

- Students learning model documentation
- Researchers exploring rule-based analytics
- Users who want insight into their social media patterns

## 📥 Inputs

CEAR accepts values for 7 platform buckets:

- TikTok
- Instagram
- YouTube
- Twitter/X
- Reddit
- Facebook
- Other

For each platform, the user inputs:

- Minutes per week
- Variety score (0–10)

Global self-reports:

- Feed satisfaction (0–10)
- FOMO / Out-of-the-loop feeling (0–10)

### Input Validation Rules

- Platforms with 0 minutes are excluded from calculations.
- Variety > 0 while minutes = 0 triggers a warning and is ignored.
- Negative values are treated as zero.
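These rules can be sketched as a small preprocessing step. The function name and row format below are illustrative, not the actual `cear_model.py` API:

```python
def validate_rows(rows):
    """Apply CEAR's input rules; return (clean_rows, warnings)."""
    clean, warnings = [], []
    for row in rows:
        minutes = max(0, row["minutes_per_week"])  # negative values -> zero
        variety = max(0, row["variety_score"])
        if minutes == 0:
            if variety > 0:  # variety without minutes: warn and ignore
                warnings.append(f"{row['platform_name']}: variety > 0 with 0 minutes; ignored")
            continue  # zero-minute platforms are excluded entirely
        clean.append({**row, "minutes_per_week": minutes, "variety_score": variety})
    return clean, warnings
```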

## 🧮 Model Logic

CEAR is a rule-based model driven by theoretical platform weights.

### Platform Weights

Each platform has:

- `W_C` – Cultural Connectedness Weight
- `W_A` – Algorithmic Risk Weight

Defined in `platform_weights.json`.

### Score Calculations

#### 1. C-Score (Cultural Connectedness)

Uses a log transform to encode diminishing returns:

    C_contrib = W_C * log10(minutes + 1)

#### 2. A-Risk (Algorithmic Risk)

Linear with respect to time:

    A_contrib = W_A * minutes
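Both per-platform contributions take only a few lines to compute. A minimal sketch; the weight values here are made up for illustration (the real ones live in `platform_weights.json`):

```python
import math

# Illustrative weights only, not the shipped values.
WEIGHTS = {"tiktok": {"W_C": 0.9, "W_A": 0.95}}

def contributions(platform, minutes):
    """Return (C_contrib, A_contrib) for one platform."""
    w = WEIGHTS[platform]
    c = w["W_C"] * math.log10(minutes + 1)  # diminishing returns in time
    a = w["W_A"] * minutes                  # linear in time
    return c, a
```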

#### 3. D-Index (Platform Diversity)

Based on the inverse Herfindahl index:

    s_i = minutes_i / total_minutes
    D_Index = 1 / sum(s_i^2)
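A minimal sketch of the inverse-Herfindahl computation (1.0 means all time on one platform; N means time split evenly across N platforms):

```python
def d_index(minutes_list):
    """Inverse Herfindahl index over per-platform time shares."""
    total = sum(minutes_list)
    shares = [m / total for m in minutes_list]
    return 1.0 / sum(s * s for s in shares)
```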

#### 4. Per-Platform Cultural Efficiency (0–100)

Each platform's cultural yield per minute, normalized so the most efficient platform scores 100:

    eff_raw = C_contrib / minutes
    normalized = eff_raw / max(eff_raw) * 100
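As a sketch, assuming zero-minute platforms were already filtered out (so no division by zero):

```python
def efficiency_scores(c_contribs, minutes_list):
    """Normalize per-minute cultural yield so the best platform scores 100."""
    raw = [c / m for c, m in zip(c_contribs, minutes_list)]  # eff_raw per platform
    top = max(raw)
    return [r / top * 100 for r in raw]
```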

#### 5. Average Variety (Weighted)

    Avg_Variety = mean(variety_score, weighted by minutes)
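A one-function sketch of the minutes-weighted mean:

```python
def avg_variety(variety_scores, minutes_list):
    """Minutes-weighted mean of the 0-10 variety scores."""
    total = sum(minutes_list)
    return sum(v * m for v, m in zip(variety_scores, minutes_list)) / total
```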

### Interpretation Logic

- Satisfaction and FOMO do not influence the numeric scores.
- Instead, they shape the narrative summary.

## 🧩 Output Sections

The app produces three final outputs:

### 1. CEAR Analysis Summary

Includes:

- C-Score
- A-Risk
- D-Index
- Average Variety
- Self-report reflections
- Warnings for invalid input patterns

### 2. Interpretation Narrative

A human-readable explanation linking:

- Platform mix
- Variety
- Satisfaction
- FOMO
- Risk and connectedness profiles

### 3. Platform Efficiency Breakdown

Both as:

- A ranked markdown list (easy to read)
- A numeric DataFrame (for analysis)

## 🧪 Validation

Since CEAR is deterministic, validation focuses on correctness:

- Unit tests confirm expected behavior for high-concentration vs. balanced usage.
- Manual tests confirm variety weighting, reset logic, and warnings.
- The scoring formulas are transparent and reproducible.

โš ๏ธ Limitations

CEAR is not a predictive model.

- It does not infer real cultural exposure.
- It cannot evaluate actual content.
- Weights reflect reasonable theoretical assumptions, not empirical fitting.
- It does not diagnose mental health or prescribe usage patterns.

The model is best used for reflection, education, and exploration.


## 🔧 Running Locally

You can run the app locally:

    pip install -r requirements.txt
    python app.py

Or import the model:

    from cear_model import CEARModel
    import pandas as pd

    model = CEARModel()
    df = pd.DataFrame([
        {"platform_name": "tiktok", "minutes_per_week": 300, "variety_score": 4},
    ])
    print(model.calculate_scores(df, satisfaction=6, fomo=4))

## 📜 License

MIT License


## 🙌 Acknowledgments

This project was built to demonstrate:

- Transparent model design
- Clear model documentation
- Proper Hugging Face Space structure
- User-oriented interpretability

Feel free to fork and extend with:

- Empirical weights
- Trend detection
- Behavioral clustering
- Recommendation strategies

CEAR v1.0 is a foundation for deeper exploration into how we relate to algorithmic feeds.