# Devstral + DGX Spark: Phased Implementation Plan
> Incremental approach: prove infrastructure first, then add model support.
## Overview
This plan breaks the Devstral + DGX Spark work into phases that can be validated independently:
0. **Phase 0**: Secure GPU HF Space + verify basic routing (make private, add HF token auth, test auth works)
1. **Phase 0.5**: Fix critical API route routing (backendFetch for key endpoints, prove GPU routing works)
2. **Phase 1**: Deploy existing CodeGen to DGX Spark (prove Docker/GPU infrastructure)
3. **Phase 2**: Add Devstral backend support, test correctness locally
4. **Phase 2b**: Frontend dynamic layer handling
5. **Phase 2c**: Wire Spark into frontend backend router + Deploy Devstral to GPU HF Space
6. **Phase 3**: Deploy Devstral to DGX Spark
7. **Phase 4**: Future enhancements (optional)
---
## Existing Backend Routing Infrastructure
The frontend already has a sophisticated backend routing system that switches between multiple backends based on user settings and environment.
### Current Architecture
**File:** `visualisable-ai/lib/backend-router.ts`
```typescript
export type BackendTier = 'free' | 'premium' | 'research' | 'admin' | 'local';
export interface BackendConfig {
  url: string;
  wsUrl: string;
  tier: BackendTier;
  reason: string;
  device: 'cpu' | 'gpu' | 'spark';
  performance: { inferenceSpeed: string; concurrentUsers: string; };
}
```
**Current Backend Targets:**
| Target | URL | When Used |
|--------|-----|-----------|
| Local | `localhost:8000` | Local mode + Remote NOT enabled |
| CPU HuggingFace | `visualisable-ai-api.hf.space` | Free tier (default) |
| GPU HuggingFace | `visualisable-ai-api-gpu.hf.space` | Premium tier (gpuEnabled=true) |
**Routing Logic (from `getBackendForUser`):**
1. **Local mode + no Remote** → `localhost:8000`
2. **Local mode + Remote + GPU** → GPU HF Space
3. **Local mode + Remote + no GPU** → CPU HF Space
4. **Production + GPU** → GPU HF Space
5. **Production + no GPU** → CPU HF Space
### Admin UI Controls
**File:** `visualisable-ai/app/admin/users/page.tsx`
Two toggles per user:
- **GPU Access** (`gpuEnabled`): Routes to GPU HuggingFace Space
- **Remote** (`backendOverride: 'remote'`): In local mode, switches from localhost to HuggingFace
### Environment Variables
```bash
NEXT_PUBLIC_MODE=local # Enables local mode (shows Remote toggle)
NEXT_PUBLIC_API_URL=http://localhost:8000 # Local backend URL
NEXT_PUBLIC_CPU_BACKEND_URL=... # CPU HuggingFace Space
NEXT_PUBLIC_GPU_BACKEND_URL=... # GPU HuggingFace Space
```
### Current Gap: Server-Side API Routes
**Issue:** The `backend-router.ts` correctly determines the backend URL per-user, but many Next.js API routes use a hardcoded `BACKEND_URL`:
```typescript
// These routes use hardcoded BACKEND_URL (NOT per-user routing):
// - /api/research/attention/analyze/route.ts
// - /api/proxy/[...path]/route.ts
// - /api/demos/route.ts
// - /api/vocabulary/search/route.ts
// etc.
const BACKEND_URL = process.env.BACKEND_URL || 'https://visualisable-ai-api.hf.space';
```
**Result:** Even if a user has `gpuEnabled=true`, server-side API routes still call the CPU Space.
**Fix Required:** API routes need to:
1. Get current user via Clerk
2. Call `getBackendForUser(user)` to get the correct backend URL
3. Use that URL for the fetch
**Resolution:** Phase 0.5 fixes the critical `/api/research/attention/analyze` endpoint to prove routing works. Remaining routes are fixed in Phase 2c.
---
## Phase 0: Secure GPU HF Space + Verify Existing Routing
**Goal:** Before adding Devstral/Spark support, secure the GPU HuggingFace Space to prevent unauthorized wake-ups and cost leakage, then verify the existing CPU/GPU routing works correctly.
### The Problem
Even with API key protection, a **public** HuggingFace Space can be:
- **Discovered** - Anyone can find it on HuggingFace
- **Woken up** - Visiting the URL or hitting any endpoint (even returning 401) wakes a sleeping Space
- **Kept awake** - Repeated requests keep the GPU running and billing
With high-VRAM GPU tiers (L40S at ~$4/hr, A100 at ~$6/hr), this is a real cost risk.
### 0.1 Make GPU HF Space Private
On HuggingFace:
1. Go to your GPU Space settings
2. Change visibility from **Public** to **Private**
3. This prevents discovery and unauthorized access
**Note:** Private Spaces require authentication via HuggingFace token.
**Important caveat:** Making the Space private prevents casual discovery, but any request that reaches the Space (even one returning 401 Unauthorized) may still wake it, depending on HuggingFace's behavior. Private is still the right move, just not a perfect shield against all wake-ups. The sleep timeout (step 0.4) is the primary defense-in-depth measure.
### 0.2 Add Server-Side HF Token to Vercel
Add a **server-side only** HF token (no `NEXT_PUBLIC_` prefix):
**In Vercel Environment Variables:**
```
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
```
Generate this token at https://huggingface.co/settings/tokens with read access to your private Space.
**Important:** Do NOT use `NEXT_PUBLIC_HF_TOKEN` - that exposes the token to the client.
### 0.3 Create Server-Only Auth Module
**Why a separate file?** In Next.js, any code imported into client components can end up in the client bundle. `backend-router.ts` contains `getBackendForUser()` which may be imported for URL/tier decisions in client code. If we put `process.env.HF_TOKEN` in the same file, it risks being referenced from client bundles (even if tree-shaken, it's fragile).
**Solution:** Keep `backend-router.ts` as "pure decision logic" (URLs, tiers, reasons) and put all server-only headers in a separate module that is **only imported from API routes**.
**File:** `visualisable-ai/lib/backend-auth.server.ts`
```typescript
import 'server-only'; // Next.js guard: errors if accidentally imported from client

/**
 * Server-only authentication headers for backend requests.
 *
 * IMPORTANT: This file must ONLY be imported from Next.js API routes (server-side).
 * Never import this from client components or shared code.
 * The 'server-only' import above will cause a build error if this is violated.
 */

// Accept both env var names for backwards compatibility; standardise on API_KEY going forward
const API_KEY = process.env.API_KEY ||
  process.env.BACKEND_API_KEY ||
  '';
const HF_TOKEN = process.env.HF_TOKEN; // Server-side only, no NEXT_PUBLIC_ prefix

/**
 * Get base authentication headers (API key only).
 * Use this as the foundation, then add HF token conditionally based on target.
 */
export function getBaseAuthHeaders(): HeadersInit {
  const headers: Record<string, string> = {
    'Content-Type': 'application/json',
  };
  if (API_KEY) {
    headers['X-API-Key'] = API_KEY;
  }
  return headers;
}

/**
 * Get HF-specific auth header (for private Spaces).
 * Only attach this when the target is a HuggingFace Space.
 */
export function getHfAuthHeader(): HeadersInit {
  return HF_TOKEN ? { Authorization: `Bearer ${HF_TOKEN}` } : {};
}

/**
 * Check if a URL is a HuggingFace Space.
 */
export function isHfSpace(url: string): boolean {
  return url.includes('.hf.space');
}
```
**Update existing `getBackendHeaders()` in `backend-router.ts`:**
Leave the existing function for backward compatibility, but remove any server-side secrets:
```typescript
// backend-router.ts - keep as client-safe decision logic only
export function getBackendHeaders(): HeadersInit {
  // Note: This function returns headers safe for client-side use.
  // For server-side requests with API keys/tokens, use backend-auth.server.ts
  return {
    'Content-Type': 'application/json',
  };
}
```
**Rule:** `HF_TOKEN` and `API_KEY` are only used in Next.js API routes (server), never in client code.
### 0.4 Configure Sleep Timeout (Defense in Depth)
On HuggingFace GPU Space settings:
- Set **Sleep timeout** to minimum (e.g., 5 minutes of inactivity)
- This reduces cost if the Space is somehow woken unexpectedly
**Trade-off note for stakeholders:** A 5-minute sleep timeout protects cost but increases cold starts. When a GPU-enabled user makes their first request after the Space has been sleeping, they will experience a delay while the Space wakes up (container restart + model load). For Devstral (~48GB), this cold start can take several minutes. Options to mitigate:
- **Longer timeout** (e.g., 15-30 minutes) - reduces cold starts but increases cost during idle periods
- **"Keep warm" scheduled pings** - a cron job that pings `/health` every few minutes to prevent sleep (increases cost to ~continuous billing)
- **Accept cold starts** - for research/premium users who understand the trade-off
Start with 5 minutes and adjust based on usage patterns and user feedback.
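If you choose the keep-warm option, here is a minimal sketch of such a pinger; the script name, env var names, and schedule are illustrative, not part of the existing codebase:
```python
# keep_warm.py - hypothetical scheduled pinger; run e.g. every 4 minutes via cron or a CI schedule
import os
import urllib.request

SPACE_URL = os.environ["GPU_SPACE_URL"]  # e.g. https://visualisable-ai-api-gpu.hf.space
HF_TOKEN = os.environ["HF_TOKEN"]        # read token for the private Space

def ping_health() -> int:
    """Hit /health with the HF token so the private Space stays awake."""
    req = urllib.request.Request(
        f"{SPACE_URL}/health",
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status

if __name__ == "__main__":
    print("health status:", ping_health())
```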
### 0.5 Verify Existing Routing Works
Before proceeding to Phase 1, verify the current CPU/GPU routing is working.
**Note on user-specific tests:** Tests 1 and 2 require testing "as a specific user" because routing depends on Clerk user metadata (`gpuEnabled`). The curl examples cannot easily reproduce this. Use one of these approaches:
1. **Browser test (simplest):** Log in as each user type and trigger the endpoint via the UI, then check backend logs to confirm which backend received the request.
2. **Admin diagnostic endpoint (recommended for automation):** Add a temporary `/api/debug/backend-routing` endpoint that returns the backend URL chosen for the current user:
```typescript
// app/api/debug/backend-routing/route.ts
import { currentUser } from '@clerk/nextjs/server';
import { getBackendForUser } from '@/lib/backend-router';

export async function GET() {
  const user = await currentUser();
  const backend = getBackendForUser(user);
  return Response.json({
    tier: backend.tier,
    url: backend.url,
    device: backend.device,
    userEmail: user?.emailAddresses?.[0]?.emailAddress
  });
}
```
Then curl with a Clerk session cookie to test routing per-user.
3. **Clerk session token in curl:** If you have tooling to extract a Clerk session token, pass it in the request.
**Test 1: CPU HF Space (free tier user)**
```bash
# Option A: Browser test
# Log in as a user WITHOUT gpuEnabled, trigger analyze, check logs
# Option B: With diagnostic endpoint (if added)
# Log in as free tier user, then:
curl https://your-app.vercel.app/api/debug/backend-routing \
-H "Cookie: __session=<clerk_session_cookie>"
# Expected: tier=free, url=visualisable-ai-api.hf.space
```
**Test 2: GPU HF Space (GPU-enabled user)**
```bash
# Option A: Browser test
# Log in as a user WITH gpuEnabled=true, trigger analyze, check logs
# Option B: With diagnostic endpoint (if added)
# Log in as GPU-enabled user, then:
curl https://your-app.vercel.app/api/debug/backend-routing \
-H "Cookie: __session=<clerk_session_cookie>"
# Expected: tier=premium, url=visualisable-ai-api-gpu.hf.space
```
**Test 3: Private Space rejects unauthenticated requests**
```bash
# Direct request to GPU Space without token should fail
curl https://visualisable-ai-api-gpu.hf.space/health
# Expected: 401 Unauthorized or redirect to login (or HTML login page)
```
**Test 4: Private Space accepts authenticated requests**
```bash
# Direct request with HF token should succeed
curl -H "Authorization: Bearer hf_xxxx" \
https://visualisable-ai-api-gpu.hf.space/health
# Expected: 200 OK
```
**Note on endpoint choice:** These tests use `/health`. Verify your backend actually serves `/health` at the root. Some HF Space setups front a Gradio app or use a different path prefix. If `/health` doesn't exist, substitute any cheap "always exists" endpoint you know is served (even `/` or a simple status endpoint). The goal is to test auth, not the specific endpoint.
**Note on private Space responses:** A private Space may return a redirect or HTML login page rather than a neat JSON 401. Both indicate the unauthenticated request was rejected, which is what we want to verify.
### 0.6 Validation Criteria
- [ ] GPU HF Space set to **Private** on HuggingFace
- [ ] `HF_TOKEN` (server-side only) added to Vercel environment variables
- [ ] `lib/backend-auth.server.ts` created with `getBaseAuthHeaders()`, `getHfAuthHeader()`, `isHfSpace()`
- [ ] `getBackendHeaders()` in `backend-router.ts` cleaned up (no secrets)
- [ ] Sleep timeout configured on GPU Space (5 minutes recommended)
- [ ] **Test:** Direct unauthenticated request to GPU Space returns 401
- [ ] **Test:** Authenticated request via Vercel API routes succeeds
- [ ] **Test:** CPU HF Space still works for free tier users
- [ ] **Test:** GPU-enabled user requests route to GPU Space and succeed
- [ ] No changes to Devstral/Spark yet - existing CodeGen on both Spaces works
---
## Phase 0.5: Fix Critical API Route Routing
**Goal:** Before investing in Spark infrastructure (Phase 1), fix the most critical API routes to use per-user backend routing. This gives you confidence that GPU routing actually works before paying for a bigger GPU tier.
**Why now?** Phase 0 verifies that the routing logic in `getBackendForUser()` is correct and that the private Space accepts authenticated requests. But many API routes still use hardcoded `BACKEND_URL`, so GPU-enabled users may not actually reach the GPU Space. Phase 0.5 fixes this gap for the key endpoints you use to validate.
### 0.5.1 Create Minimal `backendFetch` Helper
**File:** `visualisable-ai/lib/backend-fetch.ts`
This is the **minimal** helper for simple JSON POST calls. Proxy-style routes (method forwarding, query strings, binary bodies, streaming) will be handled separately in Phase 2c with a `backendProxy` helper.
```typescript
import 'server-only'; // Prevent accidental client import
import { auth, currentUser } from '@clerk/nextjs/server';
import { getBackendForUser } from './backend-router';
import { getBaseAuthHeaders, getHfAuthHeader, isHfSpace } from './backend-auth.server';

/**
 * Fetch from the backend appropriate for the current user.
 *
 * This helper:
 * 1. Gets the current user via Clerk
 * 2. Determines the correct backend (CPU HF, GPU HF, Spark, local)
 * 3. Adds authentication headers (API key, and HF token only for HF targets)
 *
 * Use this in API routes instead of hardcoded BACKEND_URL.
 *
 * Note: For proxy-style routes that need method/query/body forwarding,
 * use backendProxy() instead (added in Phase 2c).
 */
export async function backendFetch(
  endpoint: string,
  options: RequestInit = {}
): Promise<Response> {
  const { userId } = await auth();
  const user = userId ? await currentUser() : null;
  const backend = getBackendForUser(user);
  const url = `${backend.url}${endpoint}`;

  return fetch(url, {
    ...options,
    headers: {
      ...getBaseAuthHeaders(),
      ...(isHfSpace(backend.url) ? getHfAuthHeader() : {}),
      ...options.headers,
    },
  });
}
```
### 0.5.2 Update Critical Endpoints
Choose 1-2 endpoints that you actively use for testing routing:
**Recommended:** `/api/research/attention/analyze` (the main analyze endpoint)
**File:** `visualisable-ai/app/api/research/attention/analyze/route.ts`
```typescript
import { NextRequest, NextResponse } from "next/server";
import { backendFetch } from "@/lib/backend-fetch";

export async function POST(request: NextRequest) {
  try {
    // For small JSON payloads like this, parse-then-stringify is fine.
    // For large/binary/streaming payloads, use backendProxy instead.
    const body = await request.json();
    const { prompt, max_tokens, temperature } = body;

    // Use backendFetch for per-user routing
    const response = await backendFetch('/analyze/research/attention', {
      method: 'POST',
      body: JSON.stringify({
        prompt,
        max_tokens: max_tokens || 8,
        temperature: temperature || 0.7
      })
    });

    if (!response.ok) {
      const error = await response.text();
      throw new Error(`Backend error: ${error}`);
    }

    const data = await response.json();
    return NextResponse.json(data);
  } catch (error) {
    console.error("Research attention analysis error:", error);
    return NextResponse.json(
      { error: error instanceof Error ? error.message : "Analysis failed" },
      { status: 500 }
    );
  }
}
```
### 0.5.3 Validation
Re-run the Phase 0 user-specific tests with the updated endpoint:
**Test:** GPU-enabled user's request to `/api/research/attention/analyze` actually reaches GPU HF Space.
How to verify:
1. Add temporary logging in the API route: `console.log('Routing to:', backend.url);`
2. Or check GPU Space logs after triggering a request as a GPU-enabled user.
### 0.5.4 Validation Criteria
- [ ] `lib/backend-fetch.ts` created
- [ ] At least one critical endpoint updated to use `backendFetch`
- [ ] **Test:** GPU-enabled user's analyze request reaches GPU HF Space (verified via logs)
- [ ] **Test:** Free tier user's analyze request still goes to CPU HF Space
- [ ] Remaining API route fixes deferred to Phase 2c (lower priority)
---
## Phase 1: Deploy CodeGen to DGX Spark
**Goal:** Prove the Docker deployment infrastructure works with the existing CodeGen model.
### 1.1 Create Dockerfile
**File:** `Dockerfile`
```dockerfile
# Bump with care, retest CUDA + torch compatibility
FROM nvcr.io/nvidia/pytorch:24.01-py3
WORKDIR /app
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "-m", "uvicorn", "backend.model_service:app", "--host", "0.0.0.0", "--port", "8000"]
```
### 1.2 Create Docker Compose
**File:** `docker/compose.spark.yml`
```yaml
services:
  visualisable-ai-backend:
    build:
      context: ..
      dockerfile: Dockerfile
    # container_name: visualisable-ai-backend  # Uncomment for single-instance; leave commented for multi-branch
    ports:
      - "${PORT:-8000}:8000"
    shm_size: "8gb"
    volumes:
      - ..:/app  # Mount repo for dev hot-reload (requires --reload in command)
      - /srv/models-cache/huggingface:/srv/models-cache/huggingface:rw  # Writable HF cache
      - ../runs:/app/runs  # Outputs (relative to docker/ folder)
    environment:
      - HF_HOME=/srv/models-cache/huggingface
      - TRANSFORMERS_CACHE=/srv/models-cache/huggingface
      - DEFAULT_MODEL=${DEFAULT_MODEL:-codegen-350m}
      - API_KEY=${API_KEY}
      - HF_TOKEN=${HF_TOKEN}
      - HUGGINGFACE_HUB_TOKEN=${HF_TOKEN}
      # Operational tuning (included from day one for self-documentation)
      - MAX_CONTEXT=${MAX_CONTEXT:-8192}
      - BATCH_SIZE=${BATCH_SIZE:-1}
      - TORCH_DTYPE=${TORCH_DTYPE:-fp16}
      # Uncomment if experiencing CUDA memory fragmentation:
      # - PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
    gpus: all
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 3s
      retries: 5
    restart: unless-stopped
    # Dev mode: uncomment to enable hot-reload
    # command: ["python", "-m", "uvicorn", "backend.model_service:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
```
**Notes:**
- `/srv/models-cache/huggingface` is the writable HF cache directory
- No `/srv/models` mount needed for Phase 1 (CodeGen downloads to cache)
- **Multiple branches:** Use `PORT` and Compose project names to avoid collisions:
```bash
PORT=8001 docker compose -p visai-branch-a -f docker/compose.spark.yml --env-file .env.spark up -d --build
PORT=8002 docker compose -p visai-branch-b -f docker/compose.spark.yml --env-file .env.spark up -d --build
```
### 1.3 Create Environment Template
**File:** `.env.spark.example`
```bash
# DGX Spark Environment Configuration
# Copy to .env.spark and fill in values
# Backend port
PORT=8000
# Default model to load
DEFAULT_MODEL=codegen-350m
# Note: fp16 is recommended for GPU runs (faster, lower VRAM).
# Use fp32 only when debugging numerical issues.
# API key for authentication (generate a secure random string)
API_KEY=your-api-key-here
# HuggingFace token (for gated models)
HF_TOKEN=your-hf-token-here
# Model cache location on Spark (must be writable)
HF_HOME=/srv/models-cache/huggingface
# Operational tuning for large models
MAX_CONTEXT=8192
BATCH_SIZE=1
TORCH_DTYPE=fp16
# Uncomment if experiencing CUDA memory fragmentation:
# PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```
### 1.4 Update .gitignore
**File:** `.gitignore` (append)
```
# Spark deployment
.env.spark
runs/*
!runs/.gitkeep
```
**Create the runs directory with placeholder:**
```bash
mkdir -p runs
touch runs/.gitkeep
git add runs/.gitkeep
```
This ensures the `runs/` folder exists in fresh clones (required by `compose.spark.yml` volume mount `../runs:/app/runs`).
**Important:** Commit `runs/.gitkeep` in the same PR as the `.gitignore` changes.
### 1.5 Ensure /health Returns Fast and Add Debug Endpoints
**CRITICAL:** The `/health` endpoint MUST return immediately (HTTP 200) even while the model is still loading. If it blocks on model load, Compose will mark the container unhealthy during slow Devstral downloads in Phase 3.
Check existing `/health` implementation:
- Should return `{"status": "ok"}` immediately
- Model loading status should be on a separate `/ready` endpoint
If `/health` currently blocks, add a `/ready` endpoint:
- `/health` → process is up (always fast, always 200)
- `/ready` → model is loaded and ready for inference
- Return **200** when model is loaded and ready
- Return **503** when model is still loading (allows `watch` to show clear state change)
**Also add `/debug/device`** in Phase 1 so validation can verify model placement without relying on logs:
- `cuda_available`: whether CUDA is available
- `model_loaded`: whether the model is loaded
- `model_device`: the device the model is on
- `torch_dtype`: the dtype in use
- `model_id`: the loaded model ID
**Security note:** Do not return environment variables, tokens, or other secrets from `/debug/device`.
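A minimal FastAPI sketch of the three endpoints; the `ModelManager` placeholder below stands in for the existing manager in `model_service.py`, so the attribute names are illustrative:
```python
from fastapi import FastAPI, Response
import torch

app = FastAPI()

class ModelManager:
    """Placeholder for the existing manager in model_service.py."""
    model_loaded: bool = False
    model = None
    model_id = None
    dtype = None

model_manager = ModelManager()

@app.get("/health")
def health():
    """Liveness: always fast, always 200, never touches the model."""
    return {"status": "ok"}

@app.get("/ready")
def ready(response: Response):
    """Readiness: 200 once the model is loaded, 503 while it is still loading."""
    if not model_manager.model_loaded:
        response.status_code = 503
        return {"ready": False}
    return {"ready": True}

@app.get("/debug/device")
def debug_device():
    """Model placement info for validation. Never return env vars or secrets here."""
    loaded = model_manager.model_loaded
    return {
        "cuda_available": torch.cuda.is_available(),
        "model_loaded": loaded,
        "model_device": str(next(model_manager.model.parameters()).device) if loaded else None,
        "torch_dtype": str(model_manager.dtype) if loaded else None,
        "model_id": model_manager.model_id if loaded else None,
    }
```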
### 1.6 Spark Prep
On DGX Spark host:
```bash
# Create writable cache directory
sudo mkdir -p /srv/models-cache/huggingface
sudo chown -R root:dgx-ml /srv/models-cache
sudo chmod -R 2775 /srv/models-cache
# Clone repo
cd /srv/projects
git clone <repo> visualisable-ai-backend
cd visualisable-ai-backend
# Create env file
cp .env.spark.example .env.spark
vim .env.spark
```
### 1.7 Test CodeGen on Spark
```bash
# Build and run
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d --build
# Check logs
docker compose -f docker/compose.spark.yml logs -f
# Verify GPU access (deterministic check, not relying on log wording)
docker compose -f docker/compose.spark.yml --env-file .env.spark exec visualisable-ai-backend \
  python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('Device:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no-cuda')"
# Test endpoints
curl http://spark-c691.local:8000/health
curl http://spark-c691.local:8000/ready  # Returns 503 until model is loaded, then 200
curl -s http://spark-c691.local:8000/debug/device | python -m json.tool
curl -X POST http://spark-c691.local:8000/analyze/research/attention \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def hello():", "max_tokens": 5}'
```
### 1.8 Validation Criteria
- [ ] Container starts and `/health` returns 200 **immediately** (before model loads)
- [ ] `/health` remains fast even during model download
- [ ] CodeGen model loads successfully (check logs)
- [ ] `/ready` returns 200 after model is loaded
- [ ] `/analyze/research/attention` returns valid response
- [ ] CUDA is available in container (`torch.cuda.is_available()` returns `True`)
- [ ] Model device verified via `/debug/device` endpoint
- [ ] `.env.spark` is gitignored
---
## Phase 2: Add Devstral Backend Support
**Goal:** Add Devstral model support and validate correctness. This is a **correctness test**, not a performance test.
### 2.1 Add MistralAdapter
**File:** `backend/model_adapter.py`
```python
class MistralAdapter(ModelAdapter):
    """Adapter for Mistral-based models (Devstral, Mistral, etc.)"""

    def _get_layers(self):
        """Defensive access: Mistral layers may be nested differently"""
        if hasattr(self.model, 'model') and hasattr(self.model.model, 'layers'):
            return self.model.model.layers
        elif hasattr(self.model, 'layers'):
            return self.model.layers
        raise AttributeError("Cannot find transformer layers in Mistral model")

    def get_num_layers(self) -> int:
        return self.model.config.num_hidden_layers

    def get_num_heads(self) -> int:
        return self.model.config.num_attention_heads

    def get_num_kv_heads(self) -> Optional[int]:
        return getattr(self.model.config, 'num_key_value_heads', None)

    def get_layer_module(self, layer_idx: int):
        return self._get_layers()[layer_idx]

    def get_attention_module(self, layer_idx: int):
        return self._get_layers()[layer_idx].self_attn

    def get_mlp_module(self, layer_idx: int):
        return self._get_layers()[layer_idx].mlp

    def get_qkv_projections(self, layer_idx: int):
        attn = self.get_attention_module(layer_idx)
        return attn.q_proj, attn.k_proj, attn.v_proj
```
Update factory:
```python
def create_adapter(model, tokenizer, model_id):
    config = get_model_config(model_id)
    architecture = config["architecture"]
    if architecture == "gpt_neox":
        return CodeGenAdapter(model, tokenizer, model_id)
    elif architecture == "llama":
        return CodeLlamaAdapter(model, tokenizer, model_id)
    elif architecture == "mistral":
        return MistralAdapter(model, tokenizer, model_id)
    else:
        raise ValueError(f"Unsupported architecture: {architecture}")
```
### 2.2 Add Devstral to Model Config
**File:** `backend/model_config.py`
```python
"devstral-small": {
"hf_path": "mistralai/Devstral-Small-2507",
"display_name": "Devstral Small 24B",
"architecture": "mistral",
"size": "24B",
"num_layers": 40,
"num_heads": 32,
"num_kv_heads": 8,
"vocab_size": 131072,
"context_length": 131072,
"attention_type": "grouped_query",
"requires_gpu": True, # Keep True to steer users to Spark
"min_vram_gb": 48.0,
"min_ram_gb": 96.0
}
```
**Note:** `requires_gpu: True` remains set to guide users toward Spark. CPU inference is technically possible on Mac Studio (512GB RAM) but is painfully slow and not recommended for regular use.
### 2.3 Fix Hardcoded Layer Classification
**File:** `backend/model_service.py` (~line 1505)
```python
# Fixed (percentage-based, 1-indexed fraction for transformer blocks):
layer_fraction = (layer_idx + 1) / n_layers
if layer_idx == 0:
    layer_pattern = {"type": "positional", ...}
elif layer_fraction <= 0.25:
    layer_pattern = {"type": "previous_token", ...}
elif layer_fraction <= 0.75:
    layer_pattern = {"type": "induction", ...}
else:
    layer_pattern = {"type": "semantic", ...}
```
### 2.4 Wire Env Vars into Model Loader
**File:** `backend/model_service.py` (in `load_model()` or `ModelManager.__init__`)
Ensure the backend reads and applies these environment variables (a wiring sketch follows the list):
- `MAX_CONTEXT`: caps the input length passed to the tokenizer (`max_length` with truncation). If requests include `max_new_tokens`, do not silently override it unless you explicitly want a global cap; this avoids confusion when callers expect per-request control.
- `BATCH_SIZE`: wire in where applicable; otherwise leave as reserved for future batching (only meaningful if the service implements request batching)
- `TORCH_DTYPE`: map string to dtype:
- `bf16` → `torch.bfloat16`
- `fp16` → `torch.float16`
- `fp32` → `torch.float32`
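A sketch of the wiring, assuming the standard `transformers` loading path; the helper names here are illustrative and should be folded into the existing `ModelManager`/`load_model()` code:
```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DTYPE_MAP = {
    "bf16": torch.bfloat16,
    "fp16": torch.float16,
    "fp32": torch.float32,
}

MAX_CONTEXT = int(os.environ.get("MAX_CONTEXT", "8192"))
BATCH_SIZE = int(os.environ.get("BATCH_SIZE", "1"))  # reserved unless the service implements batching
TORCH_DTYPE = DTYPE_MAP.get(os.environ.get("TORCH_DTYPE", "fp16"), torch.float16)

def load_model(hf_path: str):
    """Load model/tokenizer with the dtype chosen via TORCH_DTYPE."""
    tokenizer = AutoTokenizer.from_pretrained(hf_path)
    model = AutoModelForCausalLM.from_pretrained(hf_path, torch_dtype=TORCH_DTYPE)
    model = model.to("cuda" if torch.cuda.is_available() else "cpu")
    return model, tokenizer

def encode_prompt(tokenizer, prompt: str):
    # MAX_CONTEXT caps the input length; generation length stays per-request.
    return tokenizer(prompt, truncation=True, max_length=MAX_CONTEXT, return_tensors="pt")
```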
### 2.5 Add `/models` and `/models/current` Endpoints
**File:** `backend/model_service.py`
These endpoints are required by the frontend (Phase 2b.4) and for validation (Phase 2c). Add them as explicit Phase 2 deliverables:
**`GET /models`** - List available models:
```python
@app.get("/models")
def list_models():
"""Return list of models this backend can serve."""
return {
"models": [
{
"id": model_id,
"name": config["display_name"],
"available": is_model_available(model_id), # Check VRAM, etc.
"requires_gpu": config.get("requires_gpu", False)
}
for model_id, config in SUPPORTED_MODELS.items()
]
}
```
**`GET /models/current`** - Return currently loaded model:
```python
@app.get("/models/current")
def current_model():
"""Return info about the currently loaded model."""
if not model_manager.model_loaded:
return {"id": None, "device": None, "dtype": None}
return {
"id": model_manager.model_id,
"device": str(model_manager.device),
"dtype": str(model_manager.dtype)
}
```
**Why explicit deliverables?** Phase 2c validation depends on these endpoints. Leaving them as "add if missing" creates ambiguity. By adding them in Phase 2, the frontend work in 2b and validation in 2c can proceed cleanly.
### 2.6 Local Validation (Correctness Only)
**Option A: Full load on Mac Studio (slow, ~96GB RAM needed)**
```bash
export DEFAULT_MODEL=devstral-small
export HF_TOKEN=your-token-here
python -m uvicorn backend.model_service:app --host 0.0.0.0 --port 8000
# Test (will be VERY slow on CPU)
curl -X POST http://localhost:8000/analyze/research/attention \
-H "Content-Type: application/json" \
-d '{"prompt": "def hello():", "max_tokens": 2}'
```
**Option B: Unit test without full model load**
Write a test that (a sketch follows the list):
1. Loads model config, verifies 40 layers
2. Checks MistralAdapter layer access pattern
3. Validates layer classification fractions
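A pytest sketch covering two of these checks; it reuses `get_model_config` from `backend/model_config.py` and mirrors the Phase 2.3 classification fractions (the `classify` mirror below is illustrative, not the service code itself):
```python
# test_devstral_support.py - unit-test sketch, no model weights needed
from backend.model_config import get_model_config

def test_devstral_config_layers():
    config = get_model_config("devstral-small")
    assert config["num_layers"] == 40
    assert config["architecture"] == "mistral"
    assert config["num_kv_heads"] == 8

def classify(layer_idx: int, n_layers: int) -> str:
    """Mirror of the percentage-based classification from Phase 2.3."""
    if layer_idx == 0:
        return "positional"
    fraction = (layer_idx + 1) / n_layers
    if fraction <= 0.25:
        return "previous_token"
    if fraction <= 0.75:
        return "induction"
    return "semantic"

def test_layer_classification_40_layers():
    assert classify(0, 40) == "positional"
    assert classify(5, 40) == "previous_token"   # 6/40 = 0.15
    assert classify(20, 40) == "induction"       # 21/40 = 0.525
    assert classify(39, 40) == "semantic"        # 40/40 = 1.0
```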
### 2.7 Validation Criteria
- [ ] Devstral config added to SUPPORTED_MODELS
- [ ] MistralAdapter correctly accesses layers
- [ ] Layer classification works for 40-layer model (percentage-based)
- [ ] Env vars (MAX_CONTEXT, BATCH_SIZE, TORCH_DTYPE) are wired into loader
- [ ] `/models` endpoint returns list of available models
- [ ] `/models/current` endpoint returns currently loaded model info
- [ ] One successful endpoint call (correctness, not performance)
---
## Phase 2b: Frontend Dynamic Layer Handling
**Goal:** Update frontend to handle models with different layer counts and vocab sizes.
### 2b.1 Fix Stage Boundaries
**File:** `components/research/VerticalPipeline.tsx`
Replace hardcoded layer boundaries with percentage-based:
```typescript
// Current (hardcoded for 20 layers):
const getStageInfo = (layerIdx: number) => {
  if (layerIdx === 0) return { color: 'yellow', label: 'EMBEDDING' };
  if (layerIdx <= 5) return { color: 'green', label: 'EARLY' };
  if (layerIdx <= 14) return { color: 'blue', label: 'MIDDLE' };
  if (layerIdx <= 19) return { color: 'purple', label: 'LATE' };
  return { color: 'orange', label: 'OUTPUT' };
};

// Fixed (percentage-based):
const getStageInfo = (layerIdx: number, totalLayers: number) => {
  if (layerIdx === 0) return { color: 'yellow', label: 'EMBEDDING' };
  const fraction = layerIdx / totalLayers;
  if (fraction <= 0.25) return { color: 'green', label: 'EARLY' };
  if (fraction <= 0.75) return { color: 'blue', label: 'MIDDLE' };
  return { color: 'purple', label: 'LATE' };
};
```
Update layer slice operations:
```typescript
const earlyEnd = Math.floor(numLayers * 0.25);
const middleEnd = Math.floor(numLayers * 0.75);
// EARLY LAYERS
{layersData.slice(1, earlyEnd + 1).map(...)}
// MIDDLE LAYERS
{layersData.slice(earlyEnd + 1, middleEnd + 1).map(...)}
// LATE LAYERS (JS slice end is exclusive)
{layersData.slice(middleEnd + 1, numLayers + 1).map(...)}
```
### 2b.2 Fix Hardcoded Vocabulary Display
**File:** `components/research/VerticalPipeline.tsx` (line ~305)
Replace `(51,200 tokens)` with dynamic value from `modelInfo.vocabSize`.
### 2b.3 Fix Hardcoded head_dim
**File:** `components/research/SpreadsheetGrid.tsx` (if exists)
Replace `const dHead = 64` with dynamic calculation:
```typescript
const dHead = modelInfo.hiddenSize / modelInfo.numHeads;
if (!Number.isInteger(dHead)) {
  console.warn("Non-integer head_dim", { hiddenSize: modelInfo.hiddenSize, numHeads: modelInfo.numHeads });
}
```
### 2b.4 Dynamic Model List from Backend
If the frontend model selector is a static list, update it to populate dynamically from the backend `/models` endpoint (or similar). This ensures:
- Models only appear when actually available on the connected backend
- Devstral only shows when connected to Spark (not HuggingFace)
If the frontend already fetches `supported_models` from the backend, this is naturally handled.
### 2b.5 Validation Criteria
- [ ] Stage boundaries work correctly for 40-layer model
- [ ] Vocab display shows correct value for each model
- [ ] head_dim calculated dynamically (if applicable)
- [ ] UI renders correctly with both CodeGen (20 layers) and Devstral (40 layers)
- [ ] Model selector only shows models available on the connected backend (requires Phase 2c for full test)
---
## Phase 2c: Wire Spark into Frontend Backend Router
**Goal:** Add DGX Spark as a fourth backend option in the existing routing infrastructure, and fix server-side API routes to respect per-user backend selection.
**Dependency:** Phase 2 must be complete (Devstral support merged, `/models` and `/models/current` endpoints added) before enabling Devstral as the `DEFAULT_MODEL` on the GPU HF Space.
### Important Network Constraint
**Spark is a local-network-only backend.** The hostname `spark-c691.local` is only resolvable on your local network (mDNS).
| Environment | Can reach Spark? | Notes |
|-------------|------------------|-------|
| Local dev (your machine) | ✅ Yes | Same LAN as Spark |
| Vercel production | ❌ No | Cannot resolve `.local` hostnames |
| HuggingFace Spaces | ❌ No | Cannot resolve `.local` hostnames |
**Implications:**
- Spark toggle is a **developer/research feature** for local mode only
- Production GPU users should use the **GPU HuggingFace Space** (via `gpuEnabled` toggle)
- Do NOT expose Spark to the public internet without proper security (VPN, auth, etc.)
**Spark authentication:** Spark requests are authenticated via `X-API-Key` header (same as local backend). The HF token is only for `.hf.space` targets and is not sent to Spark. For additional security, consider network-level protection (VPN/Tailscale), but API key alone is sufficient for LAN-only access.
**Fallback when Spark is unreachable:** No automatic fallback initially; fail fast with a user-visible error message and a quick toggle to switch to Remote/Local. This keeps behaviour predictable—users should always know which backend they are hitting. Automatic fallback could be added later if needed, but explicit is safer for v1.
**Important:** Production deployments (Vercel) must NOT set `NEXT_PUBLIC_MODE=local`, otherwise Spark routing could incorrectly activate. Only set this in local development `.env.local` files.
**Backend routing summary:**
| Toggle | Production (Vercel) | Local Mode |
|--------|---------------------|------------|
| Neither | CPU HF Space | localhost:8000 |
| Remote | CPU HF Space | CPU HF Space |
| Remote + GPU | GPU HF Space | GPU HF Space |
| Spark | ❌ Invalid | spark-c691.local:8000 |
### 2c.1 Update Backend Router
**File:** `visualisable-ai/lib/backend-router.ts`
Add Spark URL constant:
```typescript
const SPARK_BACKEND_URL = process.env.NEXT_PUBLIC_SPARK_BACKEND_URL ||
  'http://spark-c691.local:8000';
```
Update `BackendConfig.device` type to include Spark:
```typescript
device: 'cpu' | 'gpu' | 'spark';
```
Add helper for safe WebSocket URL construction:
```typescript
function toWsUrl(httpUrl: string, wsPath: string = '/ws'): string {
  try {
    const url = new URL(httpUrl);
    url.protocol = url.protocol === 'https:' ? 'wss:' : 'ws:';
    url.pathname = url.pathname.replace(/\/$/, '') + wsPath;
    return url.toString();
  } catch {
    // Fallback for malformed URLs
    return httpUrl.replace(/^https:/, 'wss:').replace(/^http:/, 'ws:') + wsPath;
  }
}
```
**Note:** All current backends (localhost, HuggingFace Spaces, Spark) use `/ws` as the WebSocket path. If a future backend uses a different path, pass it as the second argument.
Update `getBackendForUser` to handle Spark routing:
```typescript
export function getBackendForUser(user: User | null): BackendConfig {
  const isLocalMode = process.env.NEXT_PUBLIC_MODE === 'local';
  const localBackendUrl = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:8000';

  // Check user settings
  const hasRemoteOverride = user?.unsafeMetadata?.backendOverride === 'remote';
  const hasSparkOverride = user?.unsafeMetadata?.backendOverride === 'spark';
  const hasGPUAccess = user?.unsafeMetadata?.gpuEnabled === true;

  // SPARK MODE: Only valid in local mode (Spark not reachable from Vercel)
  // Spark toggle is a developer/research feature for local network only
  if (hasSparkOverride && isLocalMode) {
    return {
      url: SPARK_BACKEND_URL,
      wsUrl: toWsUrl(SPARK_BACKEND_URL),
      tier: 'research',
      reason: 'DGX Spark backend (local network)',
      device: 'spark',
      performance: {
        inferenceSpeed: '50-200ms',
        concurrentUsers: '10+'
      }
    };
  }

  // LOCAL MODE: Check if we should use localhost
  if (isLocalMode && !hasRemoteOverride) {
    return {
      url: localBackendUrl,
      wsUrl: toWsUrl(localBackendUrl),
      tier: 'local' as BackendTier,
      reason: 'Local development',
      device: 'cpu',
      performance: {
        inferenceSpeed: 'Variable (local)',
        concurrentUsers: 'Unlimited (local)'
      }
    };
  }

  // ... rest of existing logic (GPU HF, CPU HF)
}
```
**Note:** Spark routing is gated by `isLocalMode` - even if a user has `backendOverride: 'spark'` in production, it will fall through to the HuggingFace backends.
**Optional extra safety:** If you want to ensure Spark is never accidentally chosen server-side (e.g., during SSR in local mode), add a client-side check:
```typescript
if (hasSparkOverride && isLocalMode && typeof window !== 'undefined') {
  // Only route to Spark from client-side code
}
```
This is optional since SSR typically doesn't make backend calls, but provides defense-in-depth.
**Belt-and-braces option:** Since `NEXT_PUBLIC_MODE` is baked into the client bundle at build time, you could add a runtime hostname check as additional defense:
```typescript
const isLocalHost = typeof window !== 'undefined' &&
  (window.location.hostname === 'localhost' || window.location.hostname === '127.0.0.1');

if (hasSparkOverride && isLocalMode && isLocalHost) {
  // Spark only available when actually running locally
}
```
This prevents Spark routing even if someone accidentally deploys a local-mode build.
### 2c.2 Update Admin UI
**File:** `visualisable-ai/app/admin/users/page.tsx`
Add a third toggle for Spark backend with **mutual exclusivity** (enabling Spark clears Remote, and vice versa):
```typescript
const toggleSparkBackend = async (userId: string, currentValue: boolean) => {
  const user = users.find(u => u.id === userId);
  if (!user) return;
  const newValue = !currentValue;

  // Optimistically update UI - clear Remote if enabling Spark
  setUsers(prevUsers => prevUsers.map(u => {
    if (u.id === userId) {
      return {
        ...u,
        unsafeMetadata: {
          ...u.unsafeMetadata,
          // Mutual exclusivity: Spark and Remote cannot both be set
          backendOverride: newValue ? 'spark' : undefined
        }
      };
    }
    return u;
  }));

  // ... API call to persist (same pattern as toggleRemoteBackend)
};

// Also update toggleRemoteBackend to clear Spark when enabling Remote:
const toggleRemoteBackend = async (userId: string, currentValue: boolean) => {
  // ... existing code ...
  // Mutual exclusivity: backendOverride can only be 'remote', 'spark', or undefined
  backendOverride: newValue ? 'remote' : undefined
};
```
**Only show Spark toggle in local mode** (it's not useful in production):
```tsx
{isLocalMode && (
  <th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
    Spark
  </th>
)}

// In row:
{isLocalMode && (
  <td className="px-6 py-4 whitespace-nowrap">
    <button
      onClick={() => toggleSparkBackend(user.id, hasSparkOverride)}
      className={`relative inline-flex h-6 w-11 items-center rounded-full transition-colors cursor-pointer hover:opacity-80 ${
        hasSparkOverride ? 'bg-orange-600' : 'bg-gray-700'
      }`}
      title="Use DGX Spark backend (requires local network access)"
    >
      <span className={`inline-block h-4 w-4 transform rounded-full bg-white transition-transform ${
        hasSparkOverride ? 'translate-x-6' : 'translate-x-1'
      }`} />
    </button>
  </td>
)}
```
### 2c.3 Fix Server-Side API Routes
**Critical:** Some API routes bypass per-user routing by using hardcoded `BACKEND_URL`.
**Routes already correct** (use `getBackendForUser()` + `getBackendHeaders()`):
- `app/api/generate/route.ts` ✅
- `app/api/swe-bench/route.ts` ✅
**Routes to update:**
- `app/api/research/attention/analyze/route.ts`
- `app/api/proxy/[...path]/route.ts`
- `app/api/demos/route.ts`
- `app/api/demos/run/route.ts`
- `app/api/vocabulary/search/route.ts`
- `app/api/vocabulary/browse/route.ts`
- `app/api/token/metadata/route.ts`
- `app/api/backend/[...path]/route.ts`
**Pattern to apply:**
By Phase 2c, `lib/backend-fetch.ts` already exists (created in Phase 0.5). Use the appropriate helper:
- **`backendFetch(endpoint, options)`** - For simple JSON POST calls (most routes)
- **`backendProxy(request, endpointPath)`** - For pass-through proxy routes (added below)
**For simple JSON routes:**
```typescript
import { backendFetch } from '@/lib/backend-fetch';

export async function POST(request: NextRequest) {
  const body = await request.json();
  const response = await backendFetch('/some/endpoint', {
    method: 'POST',
    body: JSON.stringify(body)
  });
  // ...
}
```
**For proxy routes** (e.g., `/api/proxy/[...path]`, `/api/backend/[...path]`):
Add `backendProxy` to `lib/backend-fetch.ts` (extending the file created in Phase 0.5):
```typescript
// lib/backend-fetch.ts - ADD to existing file (imports already present from Phase 0.5)
// Add this import at the top:
import { NextRequest } from 'next/server';

/**
 * Proxy a request to the backend with full pass-through.
 *
 * Handles:
 * - Method forwarding (GET, POST, PUT, DELETE, etc.)
 * - Query string forwarding
 * - Body forwarding (including binary)
 * - Header pass-through (excluding hop-by-hop headers)
 * - Returns raw Response for streaming
 *
 * Use for catch-all proxy routes like /api/proxy/[...path].
 *
 * @param request - The incoming Next.js request
 * @param endpointPath - Path to forward to (must NOT include query string)
 */
export async function backendProxy(
  request: NextRequest,
  endpointPath: string
): Promise<Response> {
  const { userId } = await auth();
  const user = userId ? await currentUser() : null;
  const backend = getBackendForUser(user);

  // Build URL with query string from original request
  // Note: endpointPath should be a clean path without query string
  const url = new URL(endpointPath, backend.url);
  url.search = request.nextUrl.search;

  // Headers to exclude:
  // - hop-by-hop headers (not meant to be forwarded)
  // - auth headers (we add our own server-side auth, don't leak client tokens)
  // - proxy/CDN headers (avoid confusing upstream, keep logs clean)
  // - content-length (let fetch recalculate for streaming body)
  const excludeHeaders = new Set([
    'host', 'connection', 'keep-alive', 'transfer-encoding',
    'te', 'trailer', 'upgrade', 'proxy-authorization', 'proxy-authenticate',
    'authorization', 'cookie', // Don't forward client auth to backend
    'x-forwarded-for', 'x-forwarded-proto', 'x-forwarded-host', // Proxy headers
    'cf-connecting-ip', 'cf-ray', 'cf-ipcountry', // Cloudflare headers
    'content-length' // Let fetch set this for streaming body
  ]);

  // Forward remaining headers (everything not excluded above)
  const forwardHeaders: Record<string, string> = {};
  request.headers.forEach((value, key) => {
    if (!excludeHeaders.has(key.toLowerCase())) {
      forwardHeaders[key] = value;
    }
  });

  // Merge with auth headers (auth headers take precedence)
  // Only attach HF token for HuggingFace Space targets
  const headers = {
    ...forwardHeaders,
    ...getBaseAuthHeaders(),
    ...(isHfSpace(backend.url) ? getHfAuthHeader() : {}),
  };

  // Forward body for methods that have one
  const hasBody = !['GET', 'HEAD'].includes(request.method);
  const body = hasBody ? request.body : undefined;

  return fetch(url.toString(), {
    method: request.method,
    headers,
    body,
    // @ts-expect-error: duplex is required for streaming body but not in types
    duplex: hasBody ? 'half' : undefined,
  });
}
```
**Usage in proxy routes:**
```typescript
// app/api/proxy/[...path]/route.ts
import { NextRequest } from 'next/server';
import { backendProxy } from '@/lib/backend-fetch';

// IMPORTANT: Use Node runtime for streaming body support (duplex: 'half')
export const runtime = 'nodejs';

export async function GET(request: NextRequest, { params }: { params: { path: string[] } }) {
  // params.path is clean (no query string) - query comes from request.nextUrl.search
  const endpointPath = '/' + params.path.join('/');
  return backendProxy(request, endpointPath);
}

export async function POST(request: NextRequest, { params }: { params: { path: string[] } }) {
  const endpointPath = '/' + params.path.join('/');
  return backendProxy(request, endpointPath);
}

// ... same for PUT, DELETE, etc.
```
**Implementation notes:**
- **Runtime requirement:** **All** routes using `backendProxy` must use `export const runtime = 'nodejs'` because `request.body` streaming with `duplex: 'half'` requires Node (not Edge). This includes `/api/proxy/[...path]`, `/api/backend/[...path]`, and any other catch-all proxy routes.
- **Authentication is centralized:** Both helpers use `getBaseAuthHeaders()` (API key) and conditionally add `getHfAuthHeader()` (HF token) based on `isHfSpace()` check.
- **HF token only for HF backends:** The `isHfSpace()` check ensures the HF token is only sent to `.hf.space` URLs. This keeps Spark and localhost logs clean and avoids sending credentials to non-HF targets.
- **Streaming works automatically:** `backendProxy` returns the raw `Response` without consuming the body.
- **Body handling:** Uses `request.body` directly (ReadableStream) with `duplex: 'half'` for streaming request bodies.
### 2c.4 Add Environment Variables
**File:** `visualisable-ai/.env.local` (local development only)
```bash
# DGX Spark backend URL (for local network access)
NEXT_PUBLIC_SPARK_BACKEND_URL=http://spark-c691.local:8000
# Enable local mode (shows Spark toggle, allows localhost backend)
NEXT_PUBLIC_MODE=local
```
**File:** `visualisable-ai/.env.example` (document but don't set values)
```bash
# DGX Spark backend URL (for local network access)
# NEXT_PUBLIC_SPARK_BACKEND_URL=http://spark-c691.local:8000
# Local mode - ONLY set in .env.local, NEVER in production
# NEXT_PUBLIC_MODE=local
```
**⚠️ CRITICAL: Do NOT define `NEXT_PUBLIC_MODE` in Vercel**
This is a belt-and-braces safety measure:
- Only define `NEXT_PUBLIC_MODE=local` in `.env.local` (local development)
- **Never** add it to Vercel environment variables
- This makes accidental Spark exposure impossible, even if someone toggles user metadata incorrectly
If `NEXT_PUBLIC_MODE` is undefined in production, Spark routing is disabled regardless of user settings.
### 2c.5 Update TierIndicator (Optional)
**File:** `visualisable-ai/components/TierIndicator.tsx`
Add Spark-specific display if the component shows current backend:
```typescript
if (device === 'spark') {
  return { icon: <Cpu />, label: 'Spark', color: 'orange' };
}
```
### 2c.6 Toggle Behavior Notes
The three toggles should be mutually exclusive for `backendOverride`:
- **Remote** → `backendOverride: 'remote'` (uses HuggingFace)
- **Spark** → `backendOverride: 'spark'` (uses DGX Spark, local mode only)
- **Neither** → `backendOverride: undefined` (uses localhost in local mode)
**GPU Access** remains independent—it controls which HuggingFace Space to use when Remote is enabled.
The code in 2c.2 handles mutual exclusivity by using a single `backendOverride` field that can only hold one value.
### 2c.7 Verify /models Endpoints (Added in Phase 2)
The frontend model selector (Phase 2b.4) depends on the `/models` and `/models/current` endpoints added in Phase 2.5. Verify these endpoints work correctly on all backends and return:
```json
{
  "models": [
    {
      "id": "codegen-350m",
      "name": "CodeGen 350M",
      "available": true,
      "requires_gpu": false
    },
    {
      "id": "devstral-small",
      "name": "Devstral Small 24B",
      "available": true,
      "requires_gpu": true
    }
  ]
}
```
**Model availability by backend:**
| Model | CPU HF Space | GPU HF Space | Spark |
|-------|--------------|--------------|-------|
| CodeGen | ✅ available (default) | ✅ available | ✅ available |
| Devstral | ❌ unavailable | ✅ available (default) | ✅ available |
**Production model strategy:**
- **CPU HF Space**: CodeGen only (free tier users)
- **GPU HF Space**: Devstral as default (GPU-enabled users get Devstral automatically)
- **Spark**: Both models available (local development/research)
**Verify `/models/current` endpoint** (added in Phase 2.5) returns the currently loaded model:
```json
{
  "id": "devstral-small",
  "device": "cuda",
  "dtype": "bf16"
}
```
This is used for:
- Frontend to know which model is active without parsing `/models` list
- Debugging to quickly verify which model a backend is running
- The model_id acceptance test in Phase 2c validation
### 2c.8 Configure GPU HuggingFace Space for Devstral
**Prerequisites:** The GPU HF Space must have sufficient hardware to run Devstral.
**Minimum hardware:**
- L40S (48GB VRAM) - minimum viable
- A100 (80GB VRAM) - recommended for headroom
**Environment configuration for GPU HF Space:**
```bash
DEFAULT_MODEL=devstral-small
TORCH_DTYPE=bf16
```
**How it works:**
1. User has `gpuEnabled=true` in their profile
2. Frontend router sends requests to GPU HF Space URL
3. GPU HF Space has `DEFAULT_MODEL=devstral-small`, so Devstral loads on startup
4. `/models` endpoint returns `devstral-small` with `available: true`
5. User automatically uses Devstral without touching model selector
**Backend decides default (Approach 1 - recommended):**
The simplest approach is to let each backend decide its own default model via `DEFAULT_MODEL` environment variable:
- CPU HF Space: `DEFAULT_MODEL=codegen-350m`
- GPU HF Space: `DEFAULT_MODEL=devstral-small`
No frontend logic needed - GPU-enabled users automatically get Devstral because that's what the GPU backend loads.
**Important: Frontend must not force a model_id**
For this to work, the frontend must NOT hardcode `model_id=codegen-350m` in API requests. Either:
1. **Omit `model_id`** from requests entirely - backend uses `DEFAULT_MODEL`
2. **Use backend's reported default** - fetch from `/models/current` or `/models` endpoint
3. **Respect user selection** - if user explicitly picks a model, use that
Check existing API calls (e.g., `/analyze/research/attention`, `/generate`) to ensure they don't always send a static `model_id`. If they do, update them to omit it or use the backend's default.
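As a sketch of the intended request-building behaviour (field and function names are illustrative, not the existing code):
```typescript
interface AnalyzeRequest {
  prompt: string;
  max_tokens?: number;
  model_id?: string; // only present when the user explicitly picked a model
}

function buildAnalyzeBody(prompt: string, selectedModelId?: string): AnalyzeRequest {
  const body: AnalyzeRequest = { prompt };
  if (selectedModelId) {
    body.model_id = selectedModelId; // respect an explicit user selection
  }
  // Otherwise omit model_id so the backend falls back to its DEFAULT_MODEL.
  return body;
}
```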
**Verification steps (do these in 2c-Step-1):**
1. **Grep for hardcoded model_id:** Search the Next.js app for `model_id`, `codegen`, and `codegen-350m` to find any hardcoded references.
2. **Check backend default behaviour:** Confirm the backend uses `DEFAULT_MODEL` when `model_id` is omitted from requests. Test with a curl that omits `model_id` and verify it uses the expected default.
### 2c.9 HuggingFace Space Deployment Mechanics
**How deployment works:** The backend is deployed to HuggingFace Spaces via GitHub Actions.
1. **Repository:** Backend code lives in `visualisable-ai-backend` repo
2. **Trigger:** Push to `main` branch triggers GitHub Actions workflow
3. **Workflow:** `.github/workflows/security-check.yml` (job: `deploy-to-huggingface`) pushes code to both HF Space git remotes
4. **Space rebuild:** HuggingFace automatically rebuilds the Space when it receives the push
**Current deployment targets:**
- **CPU Space:** `visualisable-ai/api` → `https://huggingface.co/spaces/visualisable-ai/api`
- **GPU Space:** `visualisable-ai/api-gpu` → `https://huggingface.co/spaces/visualisable-ai/api-gpu`
**Key files:**
- `.github/workflows/security-check.yml` - security checks + deployment workflow
- `Dockerfile` - HF Space build configuration (already exists in repo root)
- Space settings on HuggingFace - environment variables, hardware tier, visibility
**To deploy Devstral to GPU HF Space:**
1. Ensure Phase 2 changes (Devstral support) are merged to `main`
2. GitHub Actions deploys to the Space automatically
3. In HuggingFace Space settings:
   - Set `DEFAULT_MODEL=devstral-small`
   - Set `TORCH_DTYPE=bf16`
   - Upgrade hardware tier to L40S (48GB) or A100 (80GB)
   - Ensure Space is **Private** (from Phase 0)
4. Space rebuilds and loads Devstral on startup
**Secrets configuration:**
- HuggingFace Space variables are set in Space Settings > Variables
- GitHub Actions secrets (for pushing to HF) are in repo Settings > Secrets
- Vercel env vars (for API routes) are separate from HF Space vars
### 2c.10 Recommended Implementation Order
To reduce risk, implement Phase 2c in two sub-steps:
**2c-Step-1: Fix per-user routing (CPU HF vs GPU HF)**
- Create `lib/backend-fetch.ts` helper
- Update all API routes to use `backendFetch`
- Test: GPU toggle correctly routes to GPU HuggingFace Space
- This is pure production correctness, no new features
**2c-Step-2: Add Spark as extra backend option**
- Add Spark to `backend-router.ts` (gated by local mode)
- Add Spark toggle to admin UI (local mode only)
- Test: Spark toggle routes to `spark-c691.local:8000`
- This is a local-only developer feature
### 2c.11 Validation Criteria
**Step 1 (Production correctness):**
- [ ] `lib/backend-fetch.ts` helper created (with `backendProxy` for proxy routes)
- [ ] **All proxy routes** have `export const runtime = 'nodejs'`
- [ ] **All API routes updated** to use per-user backend routing (no more hardcoded `BACKEND_URL`)
- [ ] **Grep verification:** No hardcoded `model_id=codegen-350m` found in frontend code
- [ ] **Backend verification:** Backend uses `DEFAULT_MODEL` when `model_id` is omitted (test with curl)
- [ ] **Acceptance test:** Enable GPU toggle (with Remote), confirm requests go to GPU HuggingFace Space
- [ ] `/models` endpoint exists on backend and returns available models
- [ ] GPU HF Space configured with `DEFAULT_MODEL=devstral-small` and sufficient VRAM (L40S minimum)
- [ ] **Acceptance test:** GPU-enabled user in production automatically uses Devstral (no model selector interaction needed)
- [ ] **Acceptance test (model_id verification):** As a GPU-enabled user in production:
  1. Call `/models/current` via your Vercel API route (or hit GPU HF Space directly with auth)
  2. Expect: `id=devstral-small`, `device=cuda`, `dtype=bf16`
  3. This proves no hidden `model_id=codegen-350m` is being sent and Devstral is active
**Step 2 (Spark local-only feature):**
- [ ] `NEXT_PUBLIC_SPARK_BACKEND_URL` environment variable added
- [ ] Backend router recognizes `backendOverride: 'spark'` (only in local mode)
- [ ] Admin UI shows Spark toggle (only in local mode)
- [ ] Spark toggle is mutually exclusive with Remote toggle
- [ ] TierIndicator shows correct status for Spark connection
- [ ] **Acceptance test (local mode):** Enable Spark toggle, confirm requests go to `spark-c691.local:8000`
- [ ] **Acceptance test (local mode):** Switch between Local/Remote/Spark, confirm correct backend is used each time
- [ ] **Acceptance test (production):** Spark toggle has no effect (falls through to HuggingFace)
---
## Phase 3: Deploy Devstral to DGX Spark
**Goal:** Run Devstral on DGX Spark with GPU acceleration (BF16).
### 3.1 Update Spark Environment
```bash
# On Spark, update .env.spark
DEFAULT_MODEL=devstral-small
TORCH_DTYPE=bf16
MAX_CONTEXT=8192
BATCH_SIZE=1
```
### 3.2 Rebuild and Deploy
```bash
cd /srv/projects/visualisable-ai-backend
git pull
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d --build
```
### 3.3 Monitor First Load
First load will download ~48GB model weights. Monitor with:
```bash
# Watch logs
docker compose -f docker/compose.spark.yml logs -f
# Check health (should return fast even during download)
watch -n 5 'curl -s http://spark-c691.local:8000/health'
# Check readiness (will fail until model loaded)
watch -n 10 'curl -s http://spark-c691.local:8000/ready'
```
**Note:** The first download can take a long time depending on network speed, and disk usage in `/srv/models-cache/huggingface` will grow by roughly 48GB for the Devstral weights. Ensure sufficient free disk space before starting.
### 3.4 Verify Model on GPU
Use the `/debug/device` endpoint (added in Phase 1.5) to verify the model is on GPU:
```bash
curl -s http://spark-c691.local:8000/debug/device | python -m json.tool
```
Expected response should show `model_device: "cuda:0"` (or similar CUDA device).
**Why not `python -c` exec?** Importing the module in a separate process creates a fresh manager instance with no model loaded—it won't reflect the state of the running Uvicorn process. An HTTP endpoint queries the actual running service.
### 3.5 Validation Criteria
- [ ] `/health` returns 200 fast even during model download
- [ ] `/ready` returns 200 after model is loaded
- [ ] Devstral loads on GPU (verified via deterministic check, not just logs)
- [ ] Memory usage is ~48GB VRAM (BF16)
- [ ] Inference is fast (GPU-accelerated, <5s for small prompts)
- [ ] Analysis endpoint works with Devstral
- [ ] Frontend displays 40 layers correctly with proper stage labels
---
## Phase 4: Future Enhancements (Optional)
**Note:** Devstral on GPU HuggingFace Space is now a **required** part of Phase 2c (for GPU-enabled production users). This phase covers additional optional enhancements.
### 4.1 Runtime Model Switching
**Current approach:** One-model-per-deployment. Each backend loads a single model on startup via `DEFAULT_MODEL` environment variable. This is simpler and keeps memory predictable.
**Future option:** Add `POST /models/load` endpoint for runtime model switching:
```python
@app.post("/models/load")
def load_model(model_id: str):
    """Load a different model at runtime."""
    # Unload current model
    # Load new model
    # Return new model info
```
**Trade-offs:**
- Useful for research (switch models without redeploying)
- Adds complexity: queueing, load state management, eviction, edge cases (requests arriving mid-load)
- Memory management becomes more complex with multiple large models
**Recommendation:** Keep one-model-per-deployment for v1. Add runtime switching only if there's a clear need.
### 4.2 Quantized Devstral Variant
**Not applicable for this project.** This is PhD research requiring full-precision BF16 for accurate attention pattern analysis. Quantization introduces numerical artifacts that would compromise research validity.
For reference, if quantization were acceptable:
- 4-bit GPTQ or AWQ quantization reduces VRAM to ~12-16GB
- Allows running on smaller GPU tiers (T4, L4)
- Trade-off: quality loss makes this unsuitable for research purposes
### 4.3 Additional Deployment Targets
Other optional deployment options:
- **Third HF Space** for specific use cases (e.g., research-only access)
- **Self-hosted Kubernetes** with auto-scaling
- **Modal/RunPod** for burst capacity
### 4.4 Entrypoint Consistency
The codebase has two service paths:
- **Spark backend**: `backend.model_service:app` on port 8000
- **HuggingFace wrapper**: `app:app` on port 7860
If adding new deployment targets, ensure they use consistent entrypoints and expose the same API surface.
---
## Rollback Procedures
If a deployment fails or causes issues, use these rollback procedures:
### HuggingFace Space Rollback
**Option A: Revert via GitHub**
1. Revert the problematic commit on `main` branch
2. Push the revert - GitHub Actions will redeploy the previous version
3. In HF Space settings, change `DEFAULT_MODEL` back if needed
**Option B: Manual Space revert**
1. Go to HuggingFace Space > Files > History
2. Find the last known good commit
3. Click "Revert to this version"
4. Update environment variables if needed
**Option C: Change model without redeploying**
1. In HF Space settings, change `DEFAULT_MODEL=codegen-350m`
2. Restart the Space (Settings > Restart)
3. Space will reload with CodeGen instead of Devstral
### DGX Spark Rollback
**Quick rollback (change model):**
```bash
# On Spark host
cd /srv/projects/visualisable-ai-backend
# Edit .env.spark to change DEFAULT_MODEL
vim .env.spark
# Change: DEFAULT_MODEL=codegen-350m
# Restart container
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d
```
**Full rollback (previous code version):**
```bash
# On Spark host
cd /srv/projects/visualisable-ai-backend
# Find the last known good commit
git log --oneline -10
# Reset to that commit
git checkout <commit-hash>
# Rebuild and restart
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d --build
```
**Rollback to previous Docker image (if tagged):**
```bash
# If you tagged the previous working image
docker compose -f docker/compose.spark.yml --env-file .env.spark down
docker run -d --gpus all -p 8000:8000 --env-file .env.spark visualisable-ai-backend:last-known-good
```
---
## Monitoring
Lightweight monitoring approach for v1:
### Health Checks
All backends expose `/health` (process alive) and `/ready` (model loaded):
```bash
# Quick status check
curl -s http://spark-c691.local:8000/health | jq
curl -s http://spark-c691.local:8000/ready | jq
curl -s http://spark-c691.local:8000/debug/device | jq
```
### Uptime Monitoring
For Spark (local network), use a simple cron job or uptime check:
```bash
# Add to crontab on a machine that can reach Spark
*/5 * * * * curl -sf http://spark-c691.local:8000/health > /dev/null || echo "Spark down" | mail -s "Alert: Spark unhealthy" you@example.com
```
For HuggingFace Spaces:
- Use HuggingFace's built-in Space status monitoring
- Or set up an external uptime monitor (UptimeRobot, Pingdom, etc.) to check the Space URL
### Frontend Status Indicator
In the app, show backend connection status based on `/health` and `/ready`:
- **Connected** (green): `/health` returns 200, `/ready` returns 200
- **Loading** (yellow): `/health` returns 200, `/ready` returns 503
- **Unreachable** (red): `/health` fails or times out
This gives users visibility into backend state without needing server-side monitoring.
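A minimal sketch of deriving that state on the client (the timeout value and helper name are illustrative):
```typescript
type BackendStatus = 'connected' | 'loading' | 'unreachable';

async function probeBackend(baseUrl: string): Promise<BackendStatus> {
  try {
    const health = await fetch(`${baseUrl}/health`, { signal: AbortSignal.timeout(3000) });
    if (!health.ok) return 'unreachable';
    const ready = await fetch(`${baseUrl}/ready`, { signal: AbortSignal.timeout(3000) });
    return ready.ok ? 'connected' : 'loading'; // /ready returns 503 while the model loads
  } catch {
    return 'unreachable'; // network error or timeout
  }
}
```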
---
## Summary: Files to Create/Modify
### Phase 0 (Secure GPU HF Space + Verify Basic Routing)
| File | Action |
|------|--------|
| `visualisable-ai/lib/backend-auth.server.ts` | CREATE (getBaseAuthHeaders, getHfAuthHeader, isHfSpace) |
| `visualisable-ai/lib/backend-router.ts` | MODIFY (remove secrets from getBackendHeaders) |
| **Vercel Environment** | ADD `HF_TOKEN` (server-side only) |
| **HuggingFace GPU Space** | CONFIGURE (set to Private, configure sleep timeout) |
### Phase 0.5 (Fix Critical API Routing + Prove GPU Routing)
| File | Action |
|------|--------|
| `visualisable-ai/lib/backend-fetch.ts` | CREATE (per-user backend fetch helper) |
| `visualisable-ai/app/api/research/attention/analyze/route.ts` | MODIFY (use backendFetch) |
### Phase 1 (Infrastructure)
| File | Action |
|------|--------|
| `Dockerfile` | CREATE |
| `docker/compose.spark.yml` | CREATE |
| `.env.spark.example` | CREATE |
| `.gitignore` | MODIFY (add .env.spark, runs/) |
| `backend/model_service.py` | MODIFY (ensure /health is fast, add /ready, add /debug/device) |
### Phase 2 (Devstral Backend Support)
| File | Action |
|------|--------|
| `backend/model_adapter.py` | MODIFY (add MistralAdapter) |
| `backend/model_config.py` | MODIFY (add devstral-small) |
| `backend/model_service.py` | MODIFY (fix layer classification, wire env vars, add /models and /models/current endpoints) |
### Phase 2b (Frontend Dynamic Handling)
| File | Action |
|------|--------|
| `components/research/VerticalPipeline.tsx` | MODIFY (dynamic layers, vocab) |
| `components/research/SpreadsheetGrid.tsx` | MODIFY (dynamic head_dim, if applicable) |
### Phase 2c (Frontend Routing + GPU HF Devstral)
| File | Action |
|------|--------|
| `visualisable-ai/lib/backend-router.ts` | MODIFY (add Spark backend option) |
| `visualisable-ai/app/admin/users/page.tsx` | MODIFY (add Spark toggle) |
| `visualisable-ai/app/api/proxy/[...path]/route.ts` | MODIFY (use backendProxy + runtime='nodejs') |
| `visualisable-ai/app/api/backend/[...path]/route.ts` | MODIFY (use backendProxy + runtime='nodejs') |
| `visualisable-ai/app/api/demos/route.ts` | MODIFY (use backendFetch) |
| `visualisable-ai/app/api/demos/run/route.ts` | MODIFY (use backendFetch) |
| `visualisable-ai/app/api/vocabulary/*.ts` | MODIFY (use backendFetch) |
| `visualisable-ai/app/api/token/metadata/route.ts` | MODIFY (use backendFetch) |
| *(Note: `/api/research/attention/analyze` already updated in Phase 0.5)* | |
| `visualisable-ai/.env.local` | MODIFY (add NEXT_PUBLIC_SPARK_BACKEND_URL, NEXT_PUBLIC_MODE) |
| `visualisable-ai/.env.example` | MODIFY (document env vars, warn about NEXT_PUBLIC_MODE) |
| `visualisable-ai/components/TierIndicator.tsx` | MODIFY (optional: add Spark indicator) |
| **GPU HF Space** | CONFIGURE (DEFAULT_MODEL=devstral-small, upgrade to L40S/A100) |
### Phase 3 (Spark Deployment)
| File | Action |
|------|--------|
| `.env.spark` | MODIFY (change DEFAULT_MODEL to devstral-small, TORCH_DTYPE=bf16) |
---
## Quick Checklist
Before marking each phase complete, verify:
### Phase 0 (Secure GPU HF Space)
- [ ] GPU HF Space set to Private
- [ ] `HF_TOKEN` added to Vercel (server-side only, no `NEXT_PUBLIC_`)
- [ ] `lib/backend-auth.server.ts` created with `getBaseAuthHeaders()`, `getHfAuthHeader()`, `isHfSpace()`
- [ ] `getBackendHeaders()` in backend-router.ts cleaned up (no secrets)
- [ ] Sleep timeout configured (5 minutes)
- [ ] Direct unauthenticated request to GPU Space returns 401
### Phase 0.5 (Fix Critical Routing)
- [ ] `lib/backend-fetch.ts` created with `backendFetch()` (minimal helper)
- [ ] At least one critical endpoint uses `backendFetch`
- [ ] GPU-enabled user's analyze request reaches GPU HF Space (verified)
- [ ] Free tier user's analyze request still goes to CPU HF Space
- [ ] (Note: `backendProxy()` added later in Phase 2c for proxy routes)
### Phase 1
- [ ] `/health` returns fast (< 100ms) even while model is loading
- [ ] `/ready` endpoint exists and returns model load status
- [ ] `.env.spark` is gitignored
- [ ] Multi-branch guidance documented (ports + compose -p)
### Phase 2
- [ ] MistralAdapter handles layer access correctly
- [ ] Layer classification uses percentages, not hardcoded indices
- [ ] Env vars (TORCH_DTYPE, MAX_CONTEXT, BATCH_SIZE) are wired into loader
- [ ] `requires_gpu: True` for Devstral to guide users to Spark
- [ ] `/models` endpoint returns list of available models
- [ ] `/models/current` endpoint returns currently loaded model info
### Phase 2b
- [ ] Frontend stage boundaries are percentage-based
- [ ] Vocab size is dynamic, not hardcoded 51,200
- [ ] head_dim calculated from hidden_size/num_heads (if used)
### Phase 2c
- [ ] **All API routes use per-user routing** (no hardcoded BACKEND_URL)
- [ ] **All proxy routes** have `export const runtime = 'nodejs'`
- [ ] GPU toggle correctly routes to GPU HuggingFace Space
- [ ] GPU HF Space has `DEFAULT_MODEL=devstral-small` and sufficient VRAM
- [ ] `/models/current` endpoint exists and returns current model info
- [ ] **GPU Devstral proof:** `/models/current` returns `id=devstral-small, device=cuda, dtype=bf16`
- [ ] GPU-enabled users automatically get Devstral in production
- [ ] Spark backend URL configurable via environment variable (local mode only)
- [ ] `NEXT_PUBLIC_MODE` only defined in `.env.local`, never in Vercel
- [ ] Admin UI has Spark toggle (mutually exclusive with Remote, local mode only)
- [ ] Model selector shows available models based on connected backend
### Phase 3
- [ ] TORCH_DTYPE=bf16 in .env.spark
- [ ] Model loads on GPU (check logs)
- [ ] Inference is GPU-accelerated (fast)
- [ ] Frontend renders 40 layers correctly
---
## Current Status
- [x] **Phase 0**: Secure GPU HF Space + verify basic routing ✅ COMPLETE
- [x] **Phase 0.5**: Fix critical API route routing (prove GPU routing works) ✅ COMPLETE
- [ ] **Phase 1**: Deploy CodeGen to DGX Spark ⏸️ PAUSED (see blocker below)
- [x] **Phase 2**: Add Devstral backend support ✅ COMPLETE
  - MistralAdapter added for Mistral/Devstral architecture
  - devstral-small config with 40 layers, GQA (32 Q heads, 8 KV heads)
  - Model-specific dtype (recommended_dtype field: codegen→fp16, devstral→bf16)
  - Percentage-based layer classification (works for any layer count)
  - /models and /models/current endpoints added
  - Environment variable support (DEFAULT_MODEL, TORCH_DTYPE, MAX_CONTEXT, BATCH_SIZE)
- [x] **Phase 2b**: Frontend dynamic layer handling ✅ COMPLETE
  - Percentage-based stage boundaries in VerticalPipeline
  - Dynamic vocab size from modelInfo
  - Dynamic head_dim derived from actual matrix data
  - Removed hardcoded "64 dimensions" in tutorial
- [x] **Phase 2c**: API route conversion + GPU HF Space ✅ COMPLETE
  - All 8 API routes converted to use backendFetch helper
  - Server-side auth with HF token for private Spaces
  - Per-user backend routing working
  - GPU HF Space configured: A100 (80GB), DEFAULT_MODEL=devstral-small
  - ⏸️ Spark toggle deferred (no benefit until PyTorch supports GB10)
- [ ] **Phase 3**: Deploy Devstral to DGX Spark ⏸️ BLOCKED (PyTorch sm_121 support)
- [ ] **Phase 4**: Future enhancements (optional)
---
## Blocker: DGX Spark GB10 GPU Not Yet Supported by PyTorch
**Date:** December 2025
**Status:** ⏸️ Phase 1 paused pending PyTorch update
### The Issue
The DGX Spark uses an NVIDIA GB10 GPU (Grace Blackwell architecture) with compute capability **sm_121**. Current PyTorch releases (including NGC containers up to 24.08) do not include pre-built CUDA kernels for sm_121.
**Error observed:**
```
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call
```
**Hardware details:**
- DGX Spark hostname: `spark-c691.local`
- GPU: NVIDIA GB10 (sm_121 compute capability)
- CUDA driver: 13.0
- Architecture: ARM64 (aarch64)
### What We Tried
1. **NGC PyTorch container 24.08-py3** - Does not include sm_121 kernels
2. **NGC PyTorch container 24.11-py3** - Python 3.12 compatibility issues with dependencies
3. **Standard PyTorch images** - No ARM64 + CUDA 13.0 support
4. **CPU fallback** - Works but defeats the purpose of using Spark
### What We Learned
From the [PyTorch forums](https://discuss.pytorch.org/t/nvidia-dgx-spark-support/223677/16):
1. **sm_121 is binary compatible with sm_120** - The warning/error is overly cautious
2. **A PR exists** to add sm_121 support but missed PyTorch 2.9.0 release
3. **Workaround exists** - Building PyTorch from source with sm_121 support works, but requires recompiling PyTorch, torchvision, and triton
### Why We're Pausing (Rather Than Working Around It)
Running CodeGen on CPU on the Spark provides no benefit over:
- Mac Studio (512GB RAM) for local development
- HuggingFace Spaces (CPU and GPU options available)
The Spark deployment only makes sense when we can leverage the GB10 GPU. Building PyTorch from source is complex and fragile for a temporary workaround.
### What's Ready on Spark
The following infrastructure is in place and ready to test once GPU support lands:
- [x] Docker infrastructure: `docker/compose.spark.yml`
- [x] Dockerfile: `docker/Dockerfile.spark` (using NGC container)
- [x] Environment template: `.env.spark.example`
- [x] SSH access configured with key-based auth
- [x] Git clone at `/srv/visualisable/backend`
- [x] Model cache directory: `/srv/models-cache/huggingface`
- [x] Backend code has DEVICE env var override (for CPU fallback if needed)
- [x] `/health`, `/ready`, `/debug/device` endpoints added
### Restart Instructions
When PyTorch officially supports sm_121 (expected in a PyTorch 2.9.x patch release or in 2.10):
1. **Check for updated NGC container:**
```bash
# Look for NGC PyTorch containers with sm_121 support
# https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
```
2. **Update Dockerfile.spark:**
```dockerfile
# Update to NGC container version with sm_121 support
FROM nvcr.io/nvidia/pytorch:XX.XX-py3
```
3. **On Spark, pull and rebuild:**
```bash
ssh dgxspark@spark-c691.local
cd /srv/visualisable/backend
git pull
# Remove DEVICE=cpu from .env.spark (or comment it out)
vim .env.spark
# Rebuild with new NGC container
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d --build
```
4. **Verify GPU is working:**
```bash
# Should show cuda_available: true, model_device: cuda:0
curl -s http://spark-c691.local:8000/debug/device | python -m json.tool
# Test inference
curl -X POST http://spark-c691.local:8000/analyze/research/attention \
-H "Content-Type: application/json" \
-d '{"prompt": "def hello():", "max_tokens": 5}'
```
5. **Continue with Phase 1 validation criteria**
### Monitoring PyTorch Progress
- PyTorch GitHub: Watch for sm_121 PRs
- NGC Container releases: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
- PyTorch forums: https://discuss.pytorch.org/t/nvidia-dgx-spark-support/223677
### Pre-Devstral Tag
Before making Phase 2 changes, both repos were tagged: `pre-devstral-phase2-v1`
To restore to this state if needed:
```bash
git checkout pre-devstral-phase2-v1
```
### Phase 2 Completion Tags
After Phase 2/2b/2c completion (December 2025):
- Backend: Contains MistralAdapter, devstral-small config, /models endpoints
- Frontend: Contains dynamic layer handling, backendFetch conversion
|