andres.salguero committed on
Commit 5f21106 · 1 Parent(s): a38fced

Update project for Hugging Face Spaces

Files changed (6)
  1. .gitignore +34 -0
  2. .huggingfaceignore +8 -0
  3. Dockerfile +28 -0
  4. README.md +91 -1
  5. app.py +54 -0
  6. requirements.txt +6 -0
.gitignore ADDED
@@ -0,0 +1,34 @@
+ # Python virtual environment
+ venv/
+ env/
+ ENV/
+
+ # Python bytecode
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # Distribution / packaging
+ dist/
+ build/
+ *.egg-info/
+
+ # IDE specific files
+ .idea/
+ .vscode/
+ *.swp
+ *.swo
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # Environment variables
+ .env
+ .env.local
+
+ # Streamlit specific
+ .streamlit/
+
+ # OS specific
+ .DS_Store
+ Thumbs.db
.huggingfaceignore ADDED
@@ -0,0 +1,8 @@
+ .git/
+ .gitignore
+ .huggingfaceignore
+ __pycache__/
+ *.pyc
+ venv/
+ .env
+ .DS_Store
Dockerfile ADDED
@@ -0,0 +1,28 @@
+ # Use Python 3.10 slim image
+ FROM python:3.10-slim
+
+ # Create a non-root user
+ RUN useradd -m -u 1000 user
+ USER user
+ ENV PATH="/home/user/.local/bin:$PATH"
+
+ # Set working directory
+ WORKDIR /app
+
+ # Copy requirements first to leverage Docker cache
+ COPY --chown=user ./requirements.txt requirements.txt
+
+ # Install dependencies
+ RUN pip install --no-cache-dir --user -r requirements.txt
+
+ # Download NLTK data
+ RUN python -c "import nltk; nltk.download('punkt')"
+
+ # Copy the rest of the application
+ COPY --chown=user . /app
+
+ # Expose the port Hugging Face Spaces uses
+ EXPOSE 7860
+
+ # Command to run the application
+ CMD ["streamlit", "run", "app.py", "--server.address", "0.0.0.0", "--server.port", "7860"]
README.md CHANGED
@@ -1 +1,91 @@
- # sf_tools
+ # Bayesian Token Co-occurrence Simulator
+
+ A Streamlit application that simulates and visualizes token co-occurrence patterns using Bayesian methods.
+
+ ## Local Development Setup
+
+ 1. Create and activate a virtual environment:
+ ```bash
+ # Create virtual environment
+ python -m venv venv
+
+ # Activate virtual environment
+ # On macOS/Linux:
+ source venv/bin/activate
+ # On Windows:
+ .\venv\Scripts\activate
+ ```
+
+ 2. Install dependencies:
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ 3. Run the application:
+ ```bash
+ streamlit run app.py
+ ```
+
+ The application will be available at http://localhost:8501
+
+ ## Docker Setup
+
+ 1. Build the Docker image:
+ ```bash
+ docker build -t bayesian-token-simulator .
+ ```
+
+ 2. Run the Docker container (the Dockerfile serves Streamlit on port 7860, the port Hugging Face Spaces expects):
+ ```bash
+ docker run -p 7860:7860 bayesian-token-simulator
+ ```
+
+ The application will be available at http://localhost:7860
+
+ ## Hugging Face Spaces Deployment
+
+ 1. Clone your Hugging Face Space repository:
+ ```bash
+ git clone https://huggingface.co/spaces/afscomercial/stf_model
+ ```
+
+ 2. Copy your project files to the cloned repository:
+ ```bash
+ cp -r app.py requirements.txt Dockerfile .huggingfaceignore stf_model/
+ ```
+
+ 3. Push the changes to Hugging Face:
+ ```bash
+ cd stf_model
+ git add .
+ git commit -m "Add Bayesian Token Co-occurrence Simulator"
+ git push
+ ```
+
+ Your application will be automatically deployed to https://huggingface.co/spaces/afscomercial/stf_model
+
+ ## Project Structure
+
+ - `app.py`: Main Streamlit application
+ - `requirements.txt`: Python dependencies
+ - `Dockerfile`: Docker configuration
+ - `.gitignore`: Git ignore rules
+ - `.huggingfaceignore`: Hugging Face ignore rules
+
+ ## Features
+
+ - Interactive text input for training sentences
+ - Bayesian smoothing parameter adjustment
+ - Co-occurrence matrix visualization
+ - Next token prediction
+ - Real-time updates
+
+ ## Requirements
+
+ - Python 3.8+
+ - Streamlit
+ - NumPy
+ - Pandas
+ - Matplotlib
+ - Seaborn
+ - NLTK
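
The "Bayesian smoothing parameter adjustment" feature above amounts to adding a symmetric Dirichlet prior α to each raw co-occurrence count before normalizing, so unseen token pairs keep nonzero probability. A minimal sketch of that idea, separate from the app code (the function name is hypothetical, not part of this repository):

```python
import numpy as np

def smoothed_next_token_probs(counts, alpha):
    """Additive (Dirichlet) smoothing: add alpha to each count,
    then normalize the row into a probability distribution."""
    posterior = np.asarray(counts, dtype=float) + alpha
    return posterior / posterior.sum()

# With alpha=0 an unseen pair keeps probability 0;
# with alpha>0 some probability mass is reserved for it.
unsmoothed = smoothed_next_token_probs([3, 1, 0], alpha=0.0)
smoothed = smoothed_next_token_probs([3, 1, 0], alpha=1.0)
```

With counts `[3, 1, 0]` and α = 1, the posterior row is `[4, 2, 1]`, giving probabilities `[4/7, 2/7, 1/7]`.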
app.py ADDED
@@ -0,0 +1,54 @@
+ import streamlit as st
+ import numpy as np
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import seaborn as sns
+ from nltk.tokenize import word_tokenize
+ import nltk
+
+ nltk.download('punkt')
+
+ st.title("📊 Bayesian Token Co-occurrence Simulator")
+
+ # User input
+ user_input = st.text_area("✍️ Enter your training sentences (one per line):",
+ """
+ fido loves the red ball
+ timmy and fido go to the park
+ fido and timmy love to play
+ the red ball is timmy's favorite toy
+ """)
+
+ sentences = user_input.strip().split('\n')
+ tokenized = [word_tokenize(s.lower()) for s in sentences if s.strip()]
+ vocab = sorted(set(word for sentence in tokenized for word in sentence))
+ token2idx = {word: i for i, word in enumerate(vocab)}
+ idx2token = {i: word for word, i in token2idx.items()}
+
+ # Co-occurrence matrix
+ window_size = 2
+ matrix = np.zeros((len(vocab), len(vocab)))
+
+ for sentence in tokenized:
+     for i, word in enumerate(sentence):
+         for j in range(max(0, i - window_size), min(len(sentence), i + window_size + 1)):
+             if i != j:
+                 matrix[token2idx[word]][token2idx[sentence[j]]] += 1
+
+ alpha = st.slider("🔧 Set Bayesian Prior (α smoothing)", 0.0, 2.0, 0.1)
+ posterior = matrix + alpha
+
+ df = pd.DataFrame(posterior, index=vocab, columns=vocab)
+ st.subheader("📈 Co-occurrence Heatmap")
+ fig, ax = plt.subplots(figsize=(10, 8))
+ sns.heatmap(df, annot=True, cmap="Blues", fmt=".1f", ax=ax)
+ st.pyplot(fig)
+
+ # Next-token prediction
+ selected_word = st.selectbox("🔮 Predict next token after:", vocab)
+ row = posterior[token2idx[selected_word]]
+ probs = row / row.sum()
+ prediction = np.random.choice(vocab, p=probs)
+
+ st.markdown(f"**Predicted next token:** `{prediction}`")
+
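
Stripped of the Streamlit UI, the diff's counting loop can be exercised standalone. This sketch reproduces the same symmetric-window co-occurrence counting as app.py on pre-tokenized input (the helper name is hypothetical; the app itself does this inline):

```python
import numpy as np

def cooccurrence_matrix(tokenized, window_size=2):
    """Count co-occurrences within a symmetric window of +/- window_size
    tokens, mirroring the triple loop in app.py."""
    vocab = sorted({w for sent in tokenized for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    m = np.zeros((len(vocab), len(vocab)))
    for sent in tokenized:
        for i, word in enumerate(sent):
            for j in range(max(0, i - window_size),
                           min(len(sent), i + window_size + 1)):
                if i != j:  # skip the position itself (repeated words still count)
                    m[idx[word]][idx[sent[j]]] += 1
    return vocab, m

vocab, m = cooccurrence_matrix([["a", "b", "a"]])
```

Note the `i != j` check skips positions, not words: in `["a", "b", "a"]` the two copies of "a" co-occur with each other, so the diagonal entry for "a" is nonzero.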
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ streamlit==1.32.0
+ numpy==1.26.4
+ pandas==2.2.1
+ matplotlib==3.8.3
+ seaborn==0.13.2
+ nltk==3.8.1