zuppif committed
Commit e4998da · verified · 1 Parent(s): 2a4dafa

Upload folder using huggingface_hub
.gitignore ADDED
@@ -0,0 +1,12 @@
+ .eggs/
+ dist/
+ *.pyc
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ __tmp/*
+ *.pyi
+ .mypycache
+ .ruff_cache
+ node_modules
+ backend/**/templates/
README.md CHANGED
@@ -1,12 +1,344 @@
 
  ---
- title: Gradio Aicontext
- emoji: 👀
- colorFrom: pink
- colorTo: green
  sdk: gradio
- sdk_version: 5.49.1
  app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
  ---
+ title: Gradio AI Context
  sdk: gradio
+ sdk_version: 5.49.0
  app_file: app.py
  pinned: false
  ---

+ # `gradio_aicontext`
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange">
+
+ Custom Gradio component that visualizes your AI context by colour and token count.
+
+ ## Installation
+
+ ```bash
+ pip install gradio_aicontext
+ ```
+
+ ## Usage
+
+ ```python
+ import gradio as gr
+ from gradio_aicontext import AIContext
+ import json
+ import tiktoken
+
+
+ def count_tokens(message: dict) -> int:
+     encoding = tiktoken.encoding_for_model("gpt-4")
+     return len(encoding.encode(json.dumps([message])))
+
+
+ initial_messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful AI assistant skilled in Python programming and data analysis."
+     },
+     {
+         "role": "user",
+         "content": "Can you help me write a function to calculate the factorial of a number?"
+     },
+     {
+         "role": "assistant",
+         "content": "I'll help you create a factorial function. Here are two approaches:\n\n```python\n# Recursive approach\ndef factorial_recursive(n):\n    if n <= 1:\n        return 1\n    return n * factorial_recursive(n - 1)\n\n# Iterative approach\ndef factorial_iterative(n):\n    result = 1\n    for i in range(1, n + 1):\n        result *= i\n    return result\n\n# Example usage\nprint(factorial_recursive(5))  # 120\nprint(factorial_iterative(5))  # 120\n```\n\nBoth functions work well, but the iterative version is more memory efficient for large numbers."
+     },
+     {
+         "role": "user",
+         "content": "What about using math.factorial()?"
+     },
+     {
+         "role": "assistant",
+         "content": "Great point! Python's built-in `math.factorial()` is the most efficient option:\n\n```python\nimport math\n\nresult = math.factorial(5)  # 120\n```\n\nIt's optimized, handles edge cases, and is the recommended approach for production code. Use custom implementations mainly for learning or when you need specific behavior."
+     },
+     {
+         "role": "user",
+         "content": "What's the weather like in New York today?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather",
+         "call_id": "call_weather_123",
+         "arguments": "{\"location\": \"New York, NY\", \"units\": \"fahrenheit\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_weather_123",
+         "output": "{\"location\": \"New York, NY\", \"temperature\": 72, \"condition\": \"Partly Cloudy\", \"humidity\": 65, \"wind_speed\": 8, \"visibility\": 10, \"uv_index\": 6, \"feels_like\": 75}"
+     },
+     {
+         "role": "assistant",
+         "content": "The weather in New York today is quite pleasant! Here are the details:\n\n🌤️ **Partly Cloudy** - 72°F (feels like 75°F)\n💨 Wind: 8 mph\n💧 Humidity: 65%\n👁️ Visibility: 10 miles\n☀️ UV Index: 6 (moderate)\n\nIt's a nice day to be outside!"
+     },
+     {
+         "role": "user",
+         "content": "Can you also check the forecast for tomorrow and send me an email reminder?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather_forecast",
+         "call_id": "call_forecast_456",
+         "arguments": "{\"location\": \"New York, NY\", \"days\": 1}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_forecast_456",
+         "output": "{\"location\": \"New York, NY\", \"forecast\": [{\"date\": \"2024-10-17\", \"high\": 68, \"low\": 58, \"condition\": \"Light Rain\", \"precipitation_chance\": 80, \"wind_speed\": 12}]}"
+     },
+     {
+         "type": "function_call",
+         "name": "send_email",
+         "call_id": "call_email_789",
+         "arguments": "{\"to\": \"user@example.com\", \"subject\": \"Weather Reminder: Bring an Umbrella Tomorrow!\", \"body\": \"Tomorrow's forecast for New York: Light rain expected with 80% chance of precipitation. High: 68°F, Low: 58°F. Don't forget your umbrella!\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_email_789",
+         "output": "{\"status\": \"sent\", \"message_id\": \"msg_12345\", \"delivered_at\": \"2024-10-16T20:45:30Z\"}"
+     }
+ ]
+
+ with gr.Blocks(title="AI Context Visualization Demo") as demo:
+     gr.Markdown("# AI Context Component Demo")
+     gr.Markdown("Shows token visualization for OpenAI-style conversation messages")
+
+     with gr.Row():
+         with gr.Column(scale=2):
+             json_editor = gr.JSON(
+                 value=initial_messages,
+                 label="Raw Messages (Editable)",
+                 show_label=True
+             )
+
+         with gr.Column(scale=1):
+             context_viz = AIContext(
+                 value=initial_messages,
+                 count_tokens_fn=count_tokens
+             )
+
+     json_editor.change(
+         fn=lambda x: x,
+         inputs=[json_editor],
+         outputs=[context_viz]
+     )
+
+
+ if __name__ == "__main__":
+     demo.launch()
+ ```
+
+ ## `AIContext`
+
+ ### Initialization
+
+ <table>
+ <thead>
+ <tr>
+ <th align="left">name</th>
+ <th align="left" style="width: 25%;">type</th>
+ <th align="left">default</th>
+ <th align="left">description</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td align="left"><code>value</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ list | dict | Callable | None
+ ```
+
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">list of messages or dict containing messages. If a function is provided, the function will be called each time the app loads to set the initial value of this component.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>count_tokens_fn</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ Callable[[Any], int] | None
+ ```
+
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">function to count tokens in a message. If None, uses a character-count/4 heuristic.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>label</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ str | I18nData | None
+ ```
+
+ </td>
+ <td align="left"><code>"AI Context"</code></td>
+ <td align="left">the label for this component, displayed above the component if `show_label` is `True`.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>every</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ Timer | float | None
+ ```
+
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">Continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise).</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>inputs</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ Component | Sequence[Component] | set[Component] | None
+ ```
+
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">Components that are used as inputs to calculate `value` if `value` is a function (has no effect otherwise).</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>scale</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ int | None
+ ```
+
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">relative size compared to adjacent Components.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>min_width</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ int
+ ```
+
+ </td>
+ <td align="left"><code>160</code></td>
+ <td align="left">minimum pixel width.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>interactive</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ bool | None
+ ```
+
+ </td>
+ <td align="left"><code>False</code></td>
+ <td align="left">if True, will be rendered as interactive; if False, will be read-only.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>visible</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ bool | Literal["hidden"]
+ ```
+
+ </td>
+ <td align="left"><code>True</code></td>
+ <td align="left">If False, component will be hidden.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>elem_id</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ str | None
+ ```
+
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">An optional string that is assigned as the id of this component in the HTML DOM.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>elem_classes</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ list[str] | str | None
+ ```
+
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">An optional list of strings that are assigned as the classes of this component in the HTML DOM.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>render</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ bool
+ ```
+
+ </td>
+ <td align="left"><code>True</code></td>
+ <td align="left">If False, component will not be rendered in the Blocks context.</td>
+ </tr>
+
+ <tr>
+ <td align="left"><code>key</code></td>
+ <td align="left" style="width: 25%;">
+
+ ```python
+ int | str | tuple[int | str, ...] | None
+ ```
+
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">in a gr.render, Components with the same key across re-renders are treated as the same component.</td>
+ </tr>
+ </tbody></table>
+
+
+ ### Events
+
+ | name | description |
+ |:-----|:------------|
+ | `change` | Triggered when the value of the AIContext changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. |
+
+
+ ### User function
+
+ The impact on the user's predict function varies depending on whether the component is used as an input or output for an event (or both).
+
+ - When used as an input, the component only impacts the input signature of the user function.
+ - When used as an output, the component only impacts the return signature of the user function.
+
+ The code snippet below is accurate in cases where the component is used as both an input and an output.
+
+ - **As input:** Passes the data through unchanged.
+ - **As output:** Should return a messages list or a dict containing messages.
+
+ ```python
+ def predict(
+     value: typing.Any
+ ) -> typing.Any:
+     return value
+ ```
+
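The table above documents that `count_tokens_fn=None` falls back to a character-count/4 heuristic. A minimal sketch of that fallback, assuming JSON-serialized messages; the name `approx_token_count` is hypothetical and this is illustrative only, not the component's actual code:

```python
import json

def approx_token_count(message: dict) -> int:
    # Documented fallback behaviour: roughly 4 characters per token.
    # Hypothetical sketch, not the component's real implementation.
    return max(1, len(json.dumps(message)) // 4)

print(approx_token_count({"role": "user", "content": "Hello, world!"}))
```

A real `count_tokens_fn` (e.g. the tiktoken-based one in the usage example) will disagree with this heuristic, but the visualization only needs relative sizes per message.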
__init__.py ADDED
File without changes
app.py ADDED
@@ -0,0 +1,106 @@
+ import gradio as gr
+ from gradio_aicontext import AIContext
+ import json
+ import tiktoken
+
+
+ def count_tokens(message: dict) -> int:
+     encoding = tiktoken.encoding_for_model("gpt-4")
+     return len(encoding.encode(json.dumps([message])))
+
+
+ initial_messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful AI assistant skilled in Python programming and data analysis."
+     },
+     {
+         "role": "user",
+         "content": "Can you help me write a function to calculate the factorial of a number?"
+     },
+     {
+         "role": "assistant",
+         "content": "I'll help you create a factorial function. Here are two approaches:\n\n```python\n# Recursive approach\ndef factorial_recursive(n):\n    if n <= 1:\n        return 1\n    return n * factorial_recursive(n - 1)\n\n# Iterative approach\ndef factorial_iterative(n):\n    result = 1\n    for i in range(1, n + 1):\n        result *= i\n    return result\n\n# Example usage\nprint(factorial_recursive(5))  # 120\nprint(factorial_iterative(5))  # 120\n```\n\nBoth functions work well, but the iterative version is more memory efficient for large numbers."
+     },
+     {
+         "role": "user",
+         "content": "What about using math.factorial()?"
+     },
+     {
+         "role": "assistant",
+         "content": "Great point! Python's built-in `math.factorial()` is the most efficient option:\n\n```python\nimport math\n\nresult = math.factorial(5)  # 120\n```\n\nIt's optimized, handles edge cases, and is the recommended approach for production code. Use custom implementations mainly for learning or when you need specific behavior."
+     },
+     {
+         "role": "user",
+         "content": "What's the weather like in New York today?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather",
+         "call_id": "call_weather_123",
+         "arguments": "{\"location\": \"New York, NY\", \"units\": \"fahrenheit\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_weather_123",
+         "output": "{\"location\": \"New York, NY\", \"temperature\": 72, \"condition\": \"Partly Cloudy\", \"humidity\": 65, \"wind_speed\": 8, \"visibility\": 10, \"uv_index\": 6, \"feels_like\": 75}"
+     },
+     {
+         "role": "assistant",
+         "content": "The weather in New York today is quite pleasant! Here are the details:\n\n🌤️ **Partly Cloudy** - 72°F (feels like 75°F)\n💨 Wind: 8 mph\n💧 Humidity: 65%\n👁️ Visibility: 10 miles\n☀️ UV Index: 6 (moderate)\n\nIt's a nice day to be outside!"
+     },
+     {
+         "role": "user",
+         "content": "Can you also check the forecast for tomorrow and send me an email reminder?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather_forecast",
+         "call_id": "call_forecast_456",
+         "arguments": "{\"location\": \"New York, NY\", \"days\": 1}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_forecast_456",
+         "output": "{\"location\": \"New York, NY\", \"forecast\": [{\"date\": \"2024-10-17\", \"high\": 68, \"low\": 58, \"condition\": \"Light Rain\", \"precipitation_chance\": 80, \"wind_speed\": 12}]}"
+     },
+     {
+         "type": "function_call",
+         "name": "send_email",
+         "call_id": "call_email_789",
+         "arguments": "{\"to\": \"user@example.com\", \"subject\": \"Weather Reminder: Bring an Umbrella Tomorrow!\", \"body\": \"Tomorrow's forecast for New York: Light rain expected with 80% chance of precipitation. High: 68°F, Low: 58°F. Don't forget your umbrella!\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_email_789",
+         "output": "{\"status\": \"sent\", \"message_id\": \"msg_12345\", \"delivered_at\": \"2024-10-16T20:45:30Z\"}"
+     }
+ ]
+
+ with gr.Blocks(title="AI Context Visualization Demo") as demo:
+     gr.Markdown("# AI Context Component Demo")
+     gr.Markdown("Shows token visualization for OpenAI-style conversation messages")
+
+     with gr.Row():
+         with gr.Column(scale=2):
+             json_editor = gr.JSON(
+                 value=initial_messages,
+                 label="Raw Messages (Editable)",
+                 show_label=True
+             )
+
+         with gr.Column(scale=1):
+             context_viz = AIContext(
+                 value=initial_messages,
+                 count_tokens_fn=count_tokens
+             )
+
+     json_editor.change(
+         fn=lambda x: x,
+         inputs=[json_editor],
+         outputs=[context_viz]
+     )
+
+
+ if __name__ == "__main__":
+     demo.launch()
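In app.py above, `count_tokens` measures one message at a time; the total context size the component visualizes is just the per-message sum. A dependency-free sketch of that idea, substituting the character/4 heuristic for tiktoken (which requires downloading model encodings at runtime):

```python
import json

def count_tokens(message: dict) -> int:
    # Stand-in for the tiktoken-based counter in app.py:
    # roughly 4 characters of serialized JSON per token (heuristic).
    return max(1, len(json.dumps([message])) // 4)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "What's the weather like in New York today?"},
]

# Total context size is the sum over all messages.
total = sum(count_tokens(m) for m in messages)
print(total)
```

Wrapping each message in a one-element list before serializing mirrors app.py's `json.dumps([message])`, so per-message counts include the surrounding JSON punctuation.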
css.css ADDED
@@ -0,0 +1,157 @@
+ html {
+   font-family: Inter;
+   font-size: 16px;
+   font-weight: 400;
+   line-height: 1.5;
+   -webkit-text-size-adjust: 100%;
+   background: #fff;
+   color: #323232;
+   -webkit-font-smoothing: antialiased;
+   -moz-osx-font-smoothing: grayscale;
+   text-rendering: optimizeLegibility;
+ }
+
+ :root {
+   --space: 1;
+   --vspace: calc(var(--space) * 1rem);
+   --vspace-0: calc(3 * var(--space) * 1rem);
+   --vspace-1: calc(2 * var(--space) * 1rem);
+   --vspace-2: calc(1.5 * var(--space) * 1rem);
+   --vspace-3: calc(0.5 * var(--space) * 1rem);
+ }
+
+ .app {
+   max-width: 748px !important;
+ }
+
+ .prose p {
+   margin: var(--vspace) 0;
+   line-height: calc(var(--vspace) * 2);
+   font-size: 1rem;
+ }
+
+ code {
+   font-family: "Inconsolata", sans-serif;
+   font-size: 16px;
+ }
+
+ h1,
+ h1 code {
+   font-weight: 400;
+   line-height: calc(2.5 / var(--space) * var(--vspace));
+ }
+
+ h1 code {
+   background: none;
+   border: none;
+   letter-spacing: 0.05em;
+   padding-bottom: 5px;
+   position: relative;
+   padding: 0;
+ }
+
+ h2 {
+   margin: var(--vspace-1) 0 var(--vspace-2) 0;
+   line-height: 1em;
+ }
+
+ h3,
+ h3 code {
+   margin: var(--vspace-1) 0 var(--vspace-2) 0;
+   line-height: 1em;
+ }
+
+ h4,
+ h5,
+ h6 {
+   margin: var(--vspace-3) 0 var(--vspace-3) 0;
+   line-height: var(--vspace);
+ }
+
+ .bigtitle,
+ h1,
+ h1 code {
+   font-size: calc(8px * 4.5);
+   word-break: break-word;
+ }
+
+ .title,
+ h2,
+ h2 code {
+   font-size: calc(8px * 3.375);
+   font-weight: lighter;
+   word-break: break-word;
+   border: none;
+   background: none;
+ }
+
+ .subheading1,
+ h3,
+ h3 code {
+   font-size: calc(8px * 1.8);
+   font-weight: 600;
+   border: none;
+   background: none;
+   letter-spacing: 0.1em;
+   text-transform: uppercase;
+ }
+
+ h2 code {
+   padding: 0;
+   position: relative;
+   letter-spacing: 0.05em;
+ }
+
+ blockquote {
+   font-size: calc(8px * 1.1667);
+   font-style: italic;
+   line-height: calc(1.1667 * var(--vspace));
+   margin: var(--vspace-2) var(--vspace-2);
+ }
+
+ .subheading2,
+ h4 {
+   font-size: calc(8px * 1.4292);
+   text-transform: uppercase;
+   font-weight: 600;
+ }
+
+ .subheading3,
+ h5 {
+   font-size: calc(8px * 1.2917);
+   line-height: calc(1.2917 * var(--vspace));
+   font-weight: lighter;
+   text-transform: uppercase;
+   letter-spacing: 0.15em;
+ }
+
+ h6 {
+   font-size: calc(8px * 1.1667);
+   font-size: 1.1667em;
+   font-weight: normal;
+   font-style: italic;
+   font-family: "le-monde-livre-classic-byol", serif !important;
+   letter-spacing: 0px !important;
+ }
+
+ #start .md > *:first-child {
+   margin-top: 0;
+ }
+
+ h2 + h3 {
+   margin-top: 0;
+ }
+
+ .md hr {
+   border: none;
+   border-top: 1px solid var(--block-border-color);
+   margin: var(--vspace-2) 0 var(--vspace-2) 0;
+ }
+
+ .prose ul {
+   margin: var(--vspace-2) 0 var(--vspace-1) 0;
+ }
+
+ .gap {
+   gap: 0;
+ }
space.py ADDED
@@ -0,0 +1,228 @@
+
+ import gradio as gr
+ from app import demo as app
+ import os
+
+ _docs = {'AIContext': {'description': 'Creates an AI context visualization component showing message stack with token counts.', 'members': {'__init__': {'value': {'type': 'list | dict | Callable | None', 'default': 'None', 'description': 'list of messages or dict containing messages. If a function is provided, the function will be called each time the app loads to set the initial value of this component.'}, 'count_tokens_fn': {'type': 'Callable[[Any], int] | None', 'default': 'None', 'description': 'function to count tokens in a message. If None, uses a character-count/4 heuristic.'}, 'label': {'type': 'str | I18nData | None', 'default': '"AI Context"', 'description': 'the label for this component, displayed above the component if `show_label` is `True`.'}, 'every': {'type': 'Timer | float | None', 'default': 'None', 'description': 'Continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise).'}, 'inputs': {'type': 'Component | Sequence[Component] | set[Component] | None', 'default': 'None', 'description': 'Components that are used as inputs to calculate `value` if `value` is a function (has no effect otherwise).'}, 'scale': {'type': 'int | None', 'default': 'None', 'description': 'relative size compared to adjacent Components.'}, 'min_width': {'type': 'int', 'default': '160', 'description': 'minimum pixel width.'}, 'interactive': {'type': 'bool | None', 'default': 'False', 'description': 'if True, will be rendered as interactive; if False, will be read-only.'}, 'visible': {'type': 'bool | Literal["hidden"]', 'default': 'True', 'description': 'If False, component will be hidden.'}, 'elem_id': {'type': 'str | None', 'default': 'None', 'description': 'An optional string that is assigned as the id of this component in the HTML DOM.'}, 'elem_classes': {'type': 'list[str] | str | None', 'default': 'None', 'description': 'An optional list of strings that are assigned as the classes of this component in the HTML DOM.'}, 'render': {'type': 'bool', 'default': 'True', 'description': 'If False, component will not be rendered in the Blocks context.'}, 'key': {'type': 'int | str | tuple[int | str, ...] | None', 'default': 'None', 'description': 'in a gr.render, Components with the same key across re-renders are treated as the same component.'}}, 'postprocess': {'value': {'type': 'typing.Any', 'description': 'messages list or dict containing messages.'}}, 'preprocess': {'return': {'type': 'typing.Any', 'description': 'Passes the data through unchanged.'}, 'value': None}}, 'events': {'change': {'type': None, 'default': None, 'description': 'Triggered when the value of the AIContext changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input.'}}}, '__meta__': {'additional_interfaces': {}, 'user_fn_refs': {'AIContext': []}}}
+
+ abs_path = os.path.join(os.path.dirname(__file__), "css.css")
+
+ with gr.Blocks(
+     css=abs_path,
+     theme=gr.themes.Default(
+         font_mono=[
+             gr.themes.GoogleFont("Inconsolata"),
+             "monospace",
+         ],
+     ),
+ ) as demo:
+     gr.Markdown(
+ """
+ # `gradio_aicontext`
+
+ <div style="display: flex; gap: 7px;">
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange">
+ </div>
+
+ Custom Gradio component that visualizes your AI context by colour and token count.
+ """, elem_classes=["md-custom"], header_links=True)
+     app.render()
+     gr.Markdown(
+ """
+ ## Installation
+
+ ```bash
+ pip install gradio_aicontext
+ ```
+
+ ## Usage
+
+ ```python
+ import gradio as gr
+ from gradio_aicontext import AIContext
+ import json
+ import tiktoken
+
+
+ def count_tokens(message: dict) -> int:
+     encoding = tiktoken.encoding_for_model("gpt-4")
+     return len(encoding.encode(json.dumps([message])))
+
+
+ initial_messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful AI assistant skilled in Python programming and data analysis."
+     },
+     {
+         "role": "user",
+         "content": "Can you help me write a function to calculate the factorial of a number?"
+     },
+     {
+         "role": "assistant",
+         "content": "I'll help you create a factorial function. Here are two approaches:\n\n```python\n# Recursive approach\ndef factorial_recursive(n):\n    if n <= 1:\n        return 1\n    return n * factorial_recursive(n - 1)\n\n# Iterative approach\ndef factorial_iterative(n):\n    result = 1\n    for i in range(1, n + 1):\n        result *= i\n    return result\n\n# Example usage\nprint(factorial_recursive(5))  # 120\nprint(factorial_iterative(5))  # 120\n```\n\nBoth functions work well, but the iterative version is more memory efficient for large numbers."
+     },
+     {
+         "role": "user",
+         "content": "What about using math.factorial()?"
+     },
+     {
+         "role": "assistant",
+         "content": "Great point! Python's built-in `math.factorial()` is the most efficient option:\n\n```python\nimport math\n\nresult = math.factorial(5)  # 120\n```\n\nIt's optimized, handles edge cases, and is the recommended approach for production code. Use custom implementations mainly for learning or when you need specific behavior."
+     },
+     {
+         "role": "user",
+         "content": "What's the weather like in New York today?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather",
+         "call_id": "call_weather_123",
+         "arguments": "{\"location\": \"New York, NY\", \"units\": \"fahrenheit\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_weather_123",
+         "output": "{\"location\": \"New York, NY\", \"temperature\": 72, \"condition\": \"Partly Cloudy\", \"humidity\": 65, \"wind_speed\": 8, \"visibility\": 10, \"uv_index\": 6, \"feels_like\": 75}"
+     },
+     {
+         "role": "assistant",
+         "content": "The weather in New York today is quite pleasant! Here are the details:\n\n🌤️ **Partly Cloudy** - 72°F (feels like 75°F)\n💨 Wind: 8 mph\n💧 Humidity: 65%\n👁️ Visibility: 10 miles\n☀️ UV Index: 6 (moderate)\n\nIt's a nice day to be outside!"
+     },
+     {
+         "role": "user",
+         "content": "Can you also check the forecast for tomorrow and send me an email reminder?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather_forecast",
+         "call_id": "call_forecast_456",
+         "arguments": "{\"location\": \"New York, NY\", \"days\": 1}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_forecast_456",
+         "output": "{\"location\": \"New York, NY\", \"forecast\": [{\"date\": \"2024-10-17\", \"high\": 68, \"low\": 58, \"condition\": \"Light Rain\", \"precipitation_chance\": 80, \"wind_speed\": 12}]}"
+     },
+     {
+         "type": "function_call",
+         "name": "send_email",
+         "call_id": "call_email_789",
+         "arguments": "{\"to\": \"user@example.com\", \"subject\": \"Weather Reminder: Bring an Umbrella Tomorrow!\", \"body\": \"Tomorrow's forecast for New York: Light rain expected with 80% chance of precipitation. High: 68°F, Low: 58°F. Don't forget your umbrella!\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_email_789",
+         "output": "{\"status\": \"sent\", \"message_id\": \"msg_12345\", \"delivered_at\": \"2024-10-16T20:45:30Z\"}"
+     }
+ ]
+
+ with gr.Blocks(title="AI Context Visualization Demo") as demo:
+     gr.Markdown("# AI Context Component Demo")
+     gr.Markdown("Shows token visualization for OpenAI-style conversation messages")
+
+     with gr.Row():
+         with gr.Column(scale=2):
+             json_editor = gr.JSON(
+                 value=initial_messages,
+                 label="Raw Messages (Editable)",
+                 show_label=True
+             )
+
+         with gr.Column(scale=1):
+             context_viz = AIContext(
+                 value=initial_messages,
+                 count_tokens_fn=count_tokens
+             )
+
+     json_editor.change(
+         fn=lambda x: x,
+         inputs=[json_editor],
+         outputs=[context_viz]
+     )
+
+
+ if __name__ == "__main__":
+     demo.launch()
+ ```
+ """, elem_classes=["md-custom"], header_links=True)
+
+     gr.Markdown("""
+ ## `AIContext`
+
+ ### Initialization
+ """, elem_classes=["md-custom"], header_links=True)
+
+     gr.ParamViewer(value=_docs["AIContext"]["members"]["__init__"], linkify=[])
+
+     gr.Markdown("### Events")
+     gr.ParamViewer(value=_docs["AIContext"]["events"], linkify=['Event'])
+
+     gr.Markdown("""
+
+ ### User function
+
+ The impact on the user's predict function varies depending on whether the component is used as an input or output for an event (or both).
+
+ - When used as an input, the component only impacts the input signature of the user function.
+ - When used as an output, the component only impacts the return signature of the user function.
+
+ The code snippet below is accurate in cases where the component is used as both an input and an output.
+
+ - **As input:** Passes the data through unchanged.
+ - **As output:** Should return a messages list or a dict containing messages.
+
+ ```python
+ def predict(
+     value: typing.Any
+ ) -> typing.Any:
+     return value
+ ```
+ """, elem_classes=["md-custom", "AIContext-user-fn"], header_links=True)
+
+     demo.load(None, js=r"""function() {
+     const refs = {};
+     const user_fn_refs = {
+         AIContext: [], };
+     requestAnimationFrame(() => {
+
+         Object.entries(user_fn_refs).forEach(([key, refs]) => {
+             if (refs.length > 0) {
+                 const el = document.querySelector(`.${key}-user-fn`);
+                 if (!el) return;
+                 refs.forEach(ref => {
+                     el.innerHTML = el.innerHTML.replace(
+                         new RegExp("\\b"+ref+"\\b", "g"),
+                         `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
+                     );
+                 })
+             }
+         })
+
+         Object.entries(refs).forEach(([key, refs]) => {
+             if (refs.length > 0) {
+                 const el = document.querySelector(`.${key}`);
+                 if (!el) return;
+                 refs.forEach(ref => {
+                     el.innerHTML = el.innerHTML.replace(
+                         new RegExp("\\b"+ref+"\\b", "g"),
+                         `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
+                     );
+                 })
+             }
+         })
+     })
+ }
+
+ """)
+
+     demo.launch()
src/.gitignore ADDED
@@ -0,0 +1,12 @@
+ .eggs/
+ dist/
+ *.pyc
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ __tmp/*
+ *.pyi
+ .mypycache
+ .ruff_cache
+ node_modules
+ backend/**/templates/
src/README.md ADDED
@@ -0,0 +1,344 @@
+ 
+ ---
+ title: Gradio AI Context
+ sdk: gradio
+ sdk_version: 5.49.0
+ app_file: app.py
+ pinned: false
+ ---
+ 
+ # `gradio_aicontext`
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange">
+ 
+ Custom Gradio component that visualizes your AI context, colour-coding messages and sizing them by token count.
+ 
+ ## Installation
+ 
+ ```bash
+ pip install gradio_aicontext
+ ```
+ 
+ ## Usage
+ 
+ ```python
+ import gradio as gr
+ from gradio_aicontext import AIContext
+ import json
+ import tiktoken
+ 
+ 
+ def count_tokens(message: dict) -> int:
+     encoding = tiktoken.encoding_for_model("gpt-4")
+     return len(encoding.encode(json.dumps([message])))
+ 
+ 
+ initial_messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful AI assistant skilled in Python programming and data analysis."
+     },
+     {
+         "role": "user",
+         "content": "Can you help me write a function to calculate the factorial of a number?"
+     },
+     {
+         "role": "assistant",
+         "content": "I'll help you create a factorial function. Here are two approaches:\n\n```python\n# Recursive approach\ndef factorial_recursive(n):\n    if n <= 1:\n        return 1\n    return n * factorial_recursive(n - 1)\n\n# Iterative approach\ndef factorial_iterative(n):\n    result = 1\n    for i in range(1, n + 1):\n        result *= i\n    return result\n\n# Example usage\nprint(factorial_recursive(5))  # 120\nprint(factorial_iterative(5))  # 120\n```\n\nBoth functions work well, but the iterative version is more memory efficient for large numbers."
+     },
+     {
+         "role": "user",
+         "content": "What about using math.factorial()?"
+     },
+     {
+         "role": "assistant",
+         "content": "Great point! Python's built-in `math.factorial()` is the most efficient option:\n\n```python\nimport math\n\nresult = math.factorial(5)  # 120\n```\n\nIt's optimized, handles edge cases, and is the recommended approach for production code. Use custom implementations mainly for learning or when you need specific behavior."
+     },
+     {
+         "role": "user",
+         "content": "What's the weather like in New York today?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather",
+         "call_id": "call_weather_123",
+         "arguments": "{\"location\": \"New York, NY\", \"units\": \"fahrenheit\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_weather_123",
+         "output": "{\"location\": \"New York, NY\", \"temperature\": 72, \"condition\": \"Partly Cloudy\", \"humidity\": 65, \"wind_speed\": 8, \"visibility\": 10, \"uv_index\": 6, \"feels_like\": 75}"
+     },
+     {
+         "role": "assistant",
+         "content": "The weather in New York today is quite pleasant! Here are the details:\n\n🌤️ **Partly Cloudy** - 72°F (feels like 75°F)\n💨 Wind: 8 mph\n💧 Humidity: 65%\n👁️ Visibility: 10 miles\n☀️ UV Index: 6 (moderate)\n\nIt's a nice day to be outside!"
+     },
+     {
+         "role": "user",
+         "content": "Can you also check the forecast for tomorrow and send me an email reminder?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather_forecast",
+         "call_id": "call_forecast_456",
+         "arguments": "{\"location\": \"New York, NY\", \"days\": 1}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_forecast_456",
+         "output": "{\"location\": \"New York, NY\", \"forecast\": [{\"date\": \"2024-10-17\", \"high\": 68, \"low\": 58, \"condition\": \"Light Rain\", \"precipitation_chance\": 80, \"wind_speed\": 12}]}"
+     },
+     {
+         "type": "function_call",
+         "name": "send_email",
+         "call_id": "call_email_789",
+         "arguments": "{\"to\": \"user@example.com\", \"subject\": \"Weather Reminder: Bring an Umbrella Tomorrow!\", \"body\": \"Tomorrow's forecast for New York: Light rain expected with 80% chance of precipitation. High: 68°F, Low: 58°F. Don't forget your umbrella!\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_email_789",
+         "output": "{\"status\": \"sent\", \"message_id\": \"msg_12345\", \"delivered_at\": \"2024-10-16T20:45:30Z\"}"
+     }
+ ]
+ 
+ with gr.Blocks(title="AI Context Visualization Demo") as demo:
+     gr.Markdown("# AI Context Component Demo")
+     gr.Markdown("Shows token visualization for OpenAI-style conversation messages")
+ 
+     with gr.Row():
+         with gr.Column(scale=2):
+             json_editor = gr.JSON(
+                 value=initial_messages,
+                 label="Raw Messages (Editable)",
+                 show_label=True
+             )
+ 
+         with gr.Column(scale=1):
+             context_viz = AIContext(
+                 value=initial_messages,
+                 count_tokens_fn=count_tokens
+             )
+ 
+     json_editor.change(
+         fn=lambda x: x,
+         inputs=[json_editor],
+         outputs=[context_viz]
+     )
+ 
+ 
+ if __name__ == "__main__":
+     demo.launch()
+ 
+ ```
+ 
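The `count_tokens` helper in the usage example above depends on `tiktoken`. A hedged variant that degrades gracefully when `tiktoken` (or its encoding download) is unavailable could look like this; `make_counter` is an illustrative name, not part of the package, and the fallback mirrors the component's documented character-count heuristic:

```python
import json

def make_counter():
    """Return a message -> token-count function.

    Sketch only: prefer tiktoken when it is available; on any failure
    fall back to the same character-count // 4 heuristic the component
    uses when count_tokens_fn is None.
    """
    try:
        import tiktoken  # optional dependency
        enc = tiktoken.encoding_for_model("gpt-4")
        return lambda message: len(enc.encode(json.dumps([message])))
    except Exception:
        def heuristic(message):
            content = json.dumps(message) if isinstance(message, dict) else str(message)
            return max(1, len(content) // 4)
        return heuristic

count_tokens = make_counter()
print(count_tokens({"role": "user", "content": "Hello!"}))
```

Either branch returns a callable suitable for the `count_tokens_fn` parameter shown above.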
+ ## `AIContext`
+ 
+ ### Initialization
+ 
+ <table>
+ <thead>
+ <tr>
+ <th align="left">name</th>
+ <th align="left" style="width: 25%;">type</th>
+ <th align="left">default</th>
+ <th align="left">description</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td align="left"><code>value</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ list | dict | Callable | None
+ ```
+ 
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">list of messages or dict containing messages. If a function is provided, the function will be called each time the app loads to set the initial value of this component.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>count_tokens_fn</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ Callable[[Any], int] | None
+ ```
+ 
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">function to count tokens in a message. If None, uses a character count / 4 heuristic.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>label</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ str | I18nData | None
+ ```
+ 
+ </td>
+ <td align="left"><code>"AI Context"</code></td>
+ <td align="left">the label for this component, displayed above the component if `show_label` is `True`.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>every</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ Timer | float | None
+ ```
+ 
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">Continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise).</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>inputs</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ Component | Sequence[Component] | set[Component] | None
+ ```
+ 
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">Components that are used as inputs to calculate `value` if `value` is a function (has no effect otherwise).</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>scale</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ int | None
+ ```
+ 
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">relative size compared to adjacent Components.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>min_width</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ int
+ ```
+ 
+ </td>
+ <td align="left"><code>160</code></td>
+ <td align="left">minimum pixel width.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>interactive</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ bool | None
+ ```
+ 
+ </td>
+ <td align="left"><code>False</code></td>
+ <td align="left">if True, will be rendered as interactive; if False, will be read-only.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>visible</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ bool | Literal["hidden"]
+ ```
+ 
+ </td>
+ <td align="left"><code>True</code></td>
+ <td align="left">If False, component will be hidden.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>elem_id</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ str | None
+ ```
+ 
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">An optional string that is assigned as the id of this component in the HTML DOM.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>elem_classes</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ list[str] | str | None
+ ```
+ 
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">An optional list of strings that are assigned as the classes of this component in the HTML DOM.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>render</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ bool
+ ```
+ 
+ </td>
+ <td align="left"><code>True</code></td>
+ <td align="left">If False, component will not be rendered in the Blocks context.</td>
+ </tr>
+ 
+ <tr>
+ <td align="left"><code>key</code></td>
+ <td align="left" style="width: 25%;">
+ 
+ ```python
+ int | str | tuple[int | str, ...] | None
+ ```
+ 
+ </td>
+ <td align="left"><code>None</code></td>
+ <td align="left">in a gr.render, Components with the same key across re-renders are treated as the same component.</td>
+ </tr>
+ </tbody></table>
+ 
+ 
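The `count_tokens_fn` row above mentions a character count / 4 fallback. It can be sketched standalone; `default_count_tokens` is an illustrative name (the actual fallback lives in `AIContext._count_tokens`):

```python
import json

def default_count_tokens(message) -> int:
    # Mirror of the documented fallback: serialize the message and
    # approximate one token per four characters, with a minimum of 1.
    content = json.dumps(message) if isinstance(message, dict) else str(message)
    return max(1, len(content) // 4)

print(default_count_tokens({"role": "user", "content": "Hello!"}))
```

This is a rough estimate only; pass a real tokenizer via `count_tokens_fn` for accurate counts.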
+ ### Events
+ 
+ | name | description |
+ |:-----|:------------|
+ | `change` | Triggered when the value of the AIContext changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. |
+ 
+ 
+ 
+ ### User function
+ 
+ The impact on the user's predict function varies depending on whether the component is used as an input or output for an event (or both).
+ 
+ - When used as an input, the component only impacts the input signature of the user function.
+ - When used as an output, the component only impacts the return signature of the user function.
+ 
+ The code snippet below is accurate in cases where the component is used as both an input and an output.
+ 
+ - **As input:** passes the data through unchanged.
+ - **As output:** should return a messages list or dict containing messages.
+ 
+ ```python
+ def predict(
+     value: typing.Any
+ ) -> typing.Any:
+     return value
+ ```
+ 
src/backend/gradio_aicontext/__init__.py ADDED
@@ -0,0 +1,4 @@
+ 
+ from .aicontext import AIContext
+ 
+ __all__ = ['AIContext']
src/backend/gradio_aicontext/aicontext.py ADDED
@@ -0,0 +1,132 @@
+ from __future__ import annotations
+ 
+ from collections.abc import Callable, Sequence
+ from typing import TYPE_CHECKING, Any, Literal
+ 
+ from gradio.components.base import Component
+ from gradio.events import Events
+ from gradio.i18n import I18nData
+ 
+ if TYPE_CHECKING:
+     from gradio.components import Timer
+ 
+ 
+ class AIContext(Component):
+     """
+     Creates an AI context visualization component showing the message stack with token counts.
+     """
+ 
+     EVENTS = [
+         Events.change,
+     ]
+ 
+     def __init__(
+         self,
+         value: list | dict | Callable | None = None,
+         *,
+         count_tokens_fn: Callable[[Any], int] | None = None,
+         label: str | I18nData | None = "AI Context",
+         every: Timer | float | None = None,
+         inputs: Component | Sequence[Component] | set[Component] | None = None,
+         scale: int | None = None,
+         min_width: int = 160,
+         interactive: bool | None = False,
+         visible: bool | Literal["hidden"] = True,
+         elem_id: str | None = None,
+         elem_classes: list[str] | str | None = None,
+         render: bool = True,
+         key: int | str | tuple[int | str, ...] | None = None,
+     ):
+         """
+         Parameters:
+             value: list of messages or dict containing messages. If a function is provided, the function will be called each time the app loads to set the initial value of this component.
+             count_tokens_fn: function to count tokens in a message. If None, uses a character count / 4 heuristic.
+             label: the label for this component, displayed above the component if `show_label` is `True`.
+             every: Continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise).
+             inputs: Components that are used as inputs to calculate `value` if `value` is a function (has no effect otherwise).
+             scale: relative size compared to adjacent Components.
+             min_width: minimum pixel width.
+             interactive: if True, will be rendered as interactive; if False, will be read-only.
+             visible: If False, component will be hidden.
+             elem_id: An optional string that is assigned as the id of this component in the HTML DOM.
+             elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM.
+             render: If False, component will not be rendered in the Blocks context.
+             key: in a gr.render, Components with the same key across re-renders are treated as the same component.
+         """
+         self.count_tokens_fn = count_tokens_fn
+         super().__init__(
+             label=label,
+             every=every,
+             inputs=inputs,
+             show_label=False,
+             scale=scale,
+             min_width=min_width,
+             interactive=interactive,
+             visible=visible,
+             elem_id=elem_id,
+             elem_classes=elem_classes,
+             value=value,
+             render=render,
+             key=key,
+         )
+ 
+     def preprocess(self, payload: Any) -> Any:
+         """
+         Parameters:
+             payload: the data from the frontend.
+         Returns:
+             Passes the data through unchanged.
+         """
+         return payload
+ 
+     def postprocess(self, value: Any) -> dict[str, Any] | None:
+         """
+         Parameters:
+             value: messages list or dict containing messages.
+         Returns:
+             The formatted data for the frontend with token counts.
+         """
+         if value is None:
+             return {"messages": [], "tokens_count": []}
+ 
+         messages = []
+         if isinstance(value, list):
+             messages = value
+         elif isinstance(value, dict) and "messages" in value:
+             messages = value["messages"]
+         else:
+             messages = [value] if value else []
+ 
+         tokens_count = [self._count_tokens(msg) for msg in messages]
+ 
+         return {"messages": messages, "tokens_count": tokens_count}
+ 
+     def _count_tokens(self, message: dict) -> int:
+         """Count tokens in a message using the configured function or the default heuristic."""
+         if self.count_tokens_fn:
+             return self.count_tokens_fn(message)
+ 
+         import json
+ 
+         content = json.dumps(message) if isinstance(message, dict) else str(message)
+         return max(1, len(content) // 4)
+ 
+     def api_info(self) -> dict[str, Any]:
+         return {"type": "object"}
+ 
+     def example_payload(self) -> Any:
+         return {
+             "messages": [
+                 {"role": "user", "content": "Hello!"},
+                 {"role": "assistant", "content": "Hi there!"},
+             ]
+         }
+ 
+     def example_value(self) -> Any:
+         return {
+             "messages": [
+                 {"role": "user", "content": "Hello!"},
+                 {"role": "assistant", "content": "Hi there!"},
+             ]
+         }
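The `postprocess` method above can be exercised without a running Gradio app. This standalone sketch (`postprocess_sketch` is an illustrative name; no Gradio import) reproduces its normalization and the default counting path:

```python
import json

def postprocess_sketch(value):
    """Standalone sketch of AIContext.postprocess with count_tokens_fn=None.

    Normalizes the input to a list of messages, then attaches one
    heuristic token count per message.
    """
    if value is None:
        return {"messages": [], "tokens_count": []}
    if isinstance(value, list):
        messages = value
    elif isinstance(value, dict) and "messages" in value:
        messages = value["messages"]
    else:
        messages = [value] if value else []

    def count(msg):
        # character-count // 4 fallback, minimum 1
        content = json.dumps(msg) if isinstance(msg, dict) else str(msg)
        return max(1, len(content) // 4)

    return {"messages": messages, "tokens_count": [count(m) for m in messages]}

print(postprocess_sketch([{"role": "user", "content": "Hello!"}]))
```

All three accepted shapes (list, `{"messages": [...]}` dict, single message) reduce to the same `{"messages", "tokens_count"}` payload the frontend consumes.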
src/backend/gradio_aicontext/templates/component/index.js ADDED
The diff for this file is too large to render. See raw diff
 
src/backend/gradio_aicontext/templates/component/style.css ADDED
@@ -0,0 +1 @@
+ .block.svelte-239wnu{position:relative;margin:0;box-shadow:var(--block-shadow);border-width:var(--block-border-width);border-color:var(--block-border-color);border-radius:var(--block-radius);background:var(--block-background-fill);width:100%;line-height:var(--line-sm)}.block.fullscreen.svelte-239wnu{border-radius:0}.auto-margin.svelte-239wnu{margin-left:auto;margin-right:auto}.block.border_focus.svelte-239wnu{border-color:var(--color-accent)}.block.border_contrast.svelte-239wnu{border-color:var(--body-text-color)}.padded.svelte-239wnu{padding:var(--block-padding)}.hidden.svelte-239wnu{display:none}.flex.svelte-239wnu{display:flex;flex-direction:column}.hide-container.svelte-239wnu:not(.fullscreen){margin:0;box-shadow:none;--block-border-width:0;background:transparent;padding:0;overflow:visible}.resize-handle.svelte-239wnu{position:absolute;bottom:0;right:0;width:10px;height:10px;fill:var(--block-border-color);cursor:nwse-resize}.fullscreen.svelte-239wnu{position:fixed;top:0;left:0;width:100vw;height:100vh;z-index:1000;overflow:auto}.animating.svelte-239wnu{animation:svelte-239wnu-pop-out .1s ease-out forwards}@keyframes svelte-239wnu-pop-out{0%{position:fixed;top:var(--start-top);left:var(--start-left);width:var(--start-width);height:var(--start-height);z-index:100}to{position:fixed;top:0vh;left:0vw;width:100vw;height:100vh;z-index:1000}}.placeholder.svelte-239wnu{border-radius:var(--block-radius);border-width:var(--block-border-width);border-color:var(--block-border-color);border-style:dashed}Tables */ table,tr,td,th{margin-top:var(--spacing-sm);margin-bottom:var(--spacing-sm);padding:var(--spacing-xl)}.md code,.md pre{background:none;font-family:var(--font-mono);font-size:var(--text-sm);text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;line-height:1.5;-moz-tab-size:2;tab-size:2;-webkit-hyphens:none;hyphens:none}.md pre[class*=language-]::selection,.md pre[class*=language-] ::selection,.md 
code[class*=language-]::selection,.md code[class*=language-] ::selection{text-shadow:none;background:#b3d4fc}.md pre{padding:1em;margin:.5em 0;overflow:auto;position:relative;margin-top:var(--spacing-sm);margin-bottom:var(--spacing-sm);box-shadow:none;border:none;border-radius:var(--radius-md);background:var(--code-background-fill);padding:var(--spacing-xxl);font-family:var(--font-mono);text-shadow:none;border-radius:var(--radius-sm);white-space:nowrap;display:block;white-space:pre}.md :not(pre)>code{padding:.1em;border-radius:var(--radius-xs);white-space:normal;background:var(--code-background-fill);border:1px solid var(--panel-border-color);padding:var(--spacing-xxs) var(--spacing-xs)}.md .token.comment,.md .token.prolog,.md .token.doctype,.md .token.cdata{color:#708090}.md .token.punctuation{color:#999}.md .token.namespace{opacity:.7}.md .token.property,.md .token.tag,.md .token.boolean,.md .token.number,.md .token.constant,.md .token.symbol,.md .token.deleted{color:#905}.md .token.selector,.md .token.attr-name,.md .token.string,.md .token.char,.md .token.builtin,.md .token.inserted{color:#690}.md .token.atrule,.md .token.attr-value,.md .token.keyword{color:#07a}.md .token.function,.md .token.class-name{color:#dd4a68}.md .token.regex,.md .token.important,.md .token.variable{color:#e90}.md .token.important,.md .token.bold{font-weight:700}.md .token.italic{font-style:italic}.md .token.entity{cursor:help}.dark .md .token.comment,.dark .md .token.prolog,.dark .md .token.cdata{color:#5c6370}.dark .md .token.doctype,.dark .md .token.punctuation,.dark .md .token.entity{color:#abb2bf}.dark .md .token.attr-name,.dark .md .token.class-name,.dark .md .token.boolean,.dark .md .token.constant,.dark .md .token.number,.dark .md .token.atrule{color:#d19a66}.dark .md .token.keyword{color:#c678dd}.dark .md .token.property,.dark .md .token.tag,.dark .md .token.symbol,.dark .md .token.deleted,.dark .md .token.important{color:#e06c75}.dark .md .token.selector,.dark .md 
.token.string,.dark .md .token.char,.dark .md .token.builtin,.dark .md .token.inserted,.dark .md .token.regex,.dark .md .token.attr-value,.dark .md .token.attr-value>.token.punctuation{color:#98c379}.dark .md .token.variable,.dark .md .token.operator,.dark .md .token.function{color:#61afef}.dark .md .token.url{color:#56b6c2}span.svelte-1m32c2s div[class*=code_wrap]{position:relative}span.svelte-1m32c2s span.katex{font-size:var(--text-lg);direction:ltr}span.svelte-1m32c2s div[class*=code_wrap]>button{z-index:1;cursor:pointer;border-bottom-left-radius:var(--radius-sm);padding:var(--spacing-md);width:25px;height:25px;position:absolute;right:0}span.svelte-1m32c2s .check{opacity:0;z-index:var(--layer-top);transition:opacity .2s;background:var(--code-background-fill);color:var(--body-text-color);position:absolute;top:var(--size-1-5);left:var(--size-1-5)}span.svelte-1m32c2s p:not(:first-child){margin-top:var(--spacing-xxl)}span.svelte-1m32c2s .md-header-anchor{margin-left:-25px;padding-right:8px;line-height:1;color:var(--body-text-color-subdued);opacity:0}span.svelte-1m32c2s h1:hover .md-header-anchor,span.svelte-1m32c2s h2:hover .md-header-anchor,span.svelte-1m32c2s h3:hover .md-header-anchor,span.svelte-1m32c2s h4:hover .md-header-anchor,span.svelte-1m32c2s h5:hover .md-header-anchor,span.svelte-1m32c2s h6:hover .md-header-anchor{opacity:1}span.md.svelte-1m32c2s .md-header-anchor>svg{color:var(--body-text-color-subdued)}span.svelte-1m32c2s table{word-break:break-word}div.svelte-17qq50w>.md.prose{font-weight:var(--block-info-text-weight);font-size:var(--block-info-text-size);line-height:var(--line-sm)}div.svelte-17qq50w>.md.prose *{color:var(--block-info-text-color)}div.svelte-17qq50w{margin-bottom:var(--spacing-md)}span.has-info.svelte-zgrq3{margin-bottom:var(--spacing-xs)}span.svelte-zgrq3:not(.has-info){margin-bottom:var(--spacing-lg)}span.svelte-zgrq3{display:inline-block;position:relative;z-index:var(--layer-4);border:solid var(--block-title-border-width) 
var(--block-title-border-color);border-radius:var(--block-title-radius);background:var(--block-title-background-fill);padding:var(--block-title-padding);color:var(--block-title-text-color);font-weight:var(--block-title-text-weight);font-size:var(--block-title-text-size);line-height:var(--line-sm)}span[dir=rtl].svelte-zgrq3{display:block}.hide.svelte-zgrq3{margin:0;height:0}label.svelte-igqdol.svelte-igqdol{display:inline-flex;align-items:center;z-index:var(--layer-2);box-shadow:var(--block-label-shadow);border:var(--block-label-border-width) solid var(--block-label-border-color);border-top:none;border-left:none;border-radius:var(--block-label-radius);background:var(--block-label-background-fill);padding:var(--block-label-padding);pointer-events:none;color:var(--block-label-text-color);font-weight:var(--block-label-text-weight);font-size:var(--block-label-text-size);line-height:var(--line-sm)}.gr-group label.svelte-igqdol.svelte-igqdol{border-top-left-radius:0}label.float.svelte-igqdol.svelte-igqdol{position:absolute;top:var(--block-label-margin);left:var(--block-label-margin)}label.svelte-igqdol.svelte-igqdol:not(.float){position:static;margin-top:var(--block-label-margin);margin-left:var(--block-label-margin)}.hide.svelte-igqdol.svelte-igqdol{display:none}span.svelte-igqdol.svelte-igqdol{opacity:.8;margin-right:var(--size-2);width:calc(var(--block-label-text-size) - 1px);height:calc(var(--block-label-text-size) - 1px)}.hide-label.svelte-igqdol.svelte-igqdol{box-shadow:none;border-width:0;background:transparent;overflow:visible}label[dir=rtl].svelte-igqdol.svelte-igqdol{border:var(--block-label-border-width) solid var(--block-label-border-color);border-top:none;border-right:none;border-bottom-left-radius:var(--block-radius);border-bottom-right-radius:var(--block-label-radius);border-top-left-radius:var(--block-label-radius)}label[dir=rtl].svelte-igqdol 
span.svelte-igqdol{margin-left:var(--size-2);margin-right:0}.unstyled-link.svelte-151nsdd{all:unset;cursor:pointer}button.svelte-y0enk4{display:flex;justify-content:center;align-items:center;gap:1px;z-index:var(--layer-2);border-radius:var(--radius-xs);color:var(--block-label-text-color);border:1px solid var(--border-color);padding:var(--spacing-xxs)}button.svelte-y0enk4:hover{background-color:var(--background-fill-secondary)}button[disabled].svelte-y0enk4{opacity:.5;box-shadow:none}button[disabled].svelte-y0enk4:hover{cursor:not-allowed}.padded.svelte-y0enk4{background:var(--bg-color)}button.svelte-y0enk4:hover,button.highlight.svelte-y0enk4{cursor:pointer;color:var(--color-accent)}.padded.svelte-y0enk4:hover{color:var(--block-label-text-color)}span.svelte-y0enk4{padding:0 1px;font-size:10px}div.svelte-y0enk4{display:flex;align-items:center;justify-content:center;transition:filter .2s ease-in-out}.x-small.svelte-y0enk4{width:10px;height:10px}.small.svelte-y0enk4{width:14px;height:14px}.medium.svelte-y0enk4{width:20px;height:20px}.large.svelte-y0enk4{width:22px;height:22px}.pending.svelte-y0enk4{animation:svelte-y0enk4-flash .5s infinite}@keyframes svelte-y0enk4-flash{0%{opacity:.5}50%{opacity:1}to{opacity:.5}}.transparent.svelte-y0enk4{background:transparent;border:none;box-shadow:none}.empty.svelte-3w3rth{display:flex;justify-content:center;align-items:center;margin-top:calc(0px - var(--size-6));height:var(--size-full)}.icon.svelte-3w3rth{opacity:.5;height:var(--size-5);color:var(--body-text-color)}.small.svelte-3w3rth{min-height:calc(var(--size-32) - 20px)}.large.svelte-3w3rth{min-height:calc(var(--size-64) - 20px)}.unpadded_box.svelte-3w3rth{margin-top:0}.small_parent.svelte-3w3rth{min-height:100%!important}.dropdown-arrow.svelte-145leq6,.dropdown-arrow.svelte-ihhdbf{fill:currentColor}.circle.svelte-ihhdbf{fill:currentColor;opacity:.1}svg.svelte-pb9pol{animation:svelte-pb9pol-spin 1.5s linear infinite}@keyframes 
svelte-pb9pol-spin{0%{transform:rotate(0)}to{transform:rotate(360deg)}}h2.svelte-1xg7h5n{font-size:var(--text-xl)!important}p.svelte-1xg7h5n,h2.svelte-1xg7h5n{white-space:pre-line}.wrap.svelte-1xg7h5n{display:flex;flex-direction:column;justify-content:center;align-items:center;min-height:var(--size-60);color:var(--block-label-text-color);line-height:var(--line-md);height:100%;padding-top:var(--size-3);text-align:center;margin:auto var(--spacing-lg)}.or.svelte-1xg7h5n{color:var(--body-text-color-subdued);display:flex}.icon-wrap.svelte-1xg7h5n{width:30px;margin-bottom:var(--spacing-lg)}@media (--screen-md){.wrap.svelte-1xg7h5n{font-size:var(--text-lg)}}.hovered.svelte-1xg7h5n{color:var(--color-accent)}div.svelte-q32hvf{border-top:1px solid transparent;display:flex;max-height:100%;justify-content:center;align-items:center;gap:var(--spacing-sm);height:auto;align-items:flex-end;color:var(--block-label-text-color);flex-shrink:0}.show_border.svelte-q32hvf{border-top:1px solid var(--block-border-color);margin-top:var(--spacing-xxl);box-shadow:var(--shadow-drop)}.source-selection.svelte-15ls1gu{display:flex;align-items:center;justify-content:center;border-top:1px solid var(--border-color-primary);width:100%;margin-left:auto;margin-right:auto;height:var(--size-10)}.icon.svelte-15ls1gu{width:22px;height:22px;margin:var(--spacing-lg) var(--spacing-xs);padding:var(--spacing-xs);color:var(--neutral-400);border-radius:var(--radius-md)}.selected.svelte-15ls1gu{color:var(--color-accent)}.icon.svelte-15ls1gu:hover,.icon.svelte-15ls1gu:focus{color:var(--color-accent)}.icon-button-wrapper.svelte-1h0hs6p{display:flex;flex-direction:row;align-items:center;justify-content:center;z-index:var(--layer-2);gap:var(--spacing-sm);box-shadow:var(--shadow-drop);border:1px solid 
var(--border-color-primary);background:var(--block-background-fill);padding:var(--spacing-xxs)}.icon-button-wrapper.hide-top-corner.svelte-1h0hs6p{border-top:none;border-right:none;border-radius:var(--block-label-right-radius)}.icon-button-wrapper.display-top-corner.svelte-1h0hs6p{border-radius:var(--radius-sm) 0 0 var(--radius-sm);top:var(--spacing-sm);right:-1px}.icon-button-wrapper.svelte-1h0hs6p:not(.top-panel){border:1px solid var(--border-color-primary);border-radius:var(--radius-sm)}.top-panel.svelte-1h0hs6p{position:absolute;top:var(--block-label-margin);right:var(--block-label-margin);margin:0}.icon-button-wrapper.svelte-1h0hs6p button{margin:var(--spacing-xxs);border-radius:var(--radius-xs);position:relative}.icon-button-wrapper.svelte-1h0hs6p a.download-link:not(:last-child),.icon-button-wrapper.svelte-1h0hs6p button:not(:last-child){margin-right:var(--spacing-xxs)}.icon-button-wrapper.svelte-1h0hs6p a.download-link:not(:last-child):not(.no-border *):after,.icon-button-wrapper.svelte-1h0hs6p button:not(:last-child):not(.no-border *):after{content:"";position:absolute;right:-4.5px;top:15%;height:70%;width:1px;background-color:var(--border-color-primary)}.icon-button-wrapper.svelte-1h0hs6p>*{height:100%}svg.svelte-43sxxs.svelte-43sxxs{width:var(--size-20);height:var(--size-20)}svg.svelte-43sxxs path.svelte-43sxxs{fill:var(--loader-color)}div.svelte-43sxxs.svelte-43sxxs{z-index:var(--layer-2)}.margin.svelte-43sxxs.svelte-43sxxs{margin:var(--size-4)}.wrap.svelte-vusapu.svelte-vusapu{display:flex;flex-direction:column;justify-content:center;align-items:center;z-index:var(--layer-3);transition:opacity .1s ease-in-out;border-radius:var(--block-radius);background:var(--block-background-fill);padding:0 
var(--size-6);overflow:hidden;pointer-events:none}.wrap.center.svelte-vusapu.svelte-vusapu{top:0;right:0;left:0}.wrap.default.svelte-vusapu.svelte-vusapu{top:0;right:0;bottom:0;left:0}.hide.svelte-vusapu.svelte-vusapu{opacity:0;pointer-events:none}.generating.svelte-vusapu.svelte-vusapu{animation:svelte-vusapu-pulseStart 1s cubic-bezier(.4,0,.6,1),svelte-vusapu-pulse 2s cubic-bezier(.4,0,.6,1) 1s infinite;border:2px solid var(--color-accent);background:transparent;z-index:var(--layer-1);pointer-events:none}.translucent.svelte-vusapu.svelte-vusapu{background:none}@keyframes svelte-vusapu-pulseStart{0%{opacity:0}to{opacity:1}}@keyframes svelte-vusapu-pulse{0%,to{opacity:1}50%{opacity:.5}}.loading.svelte-vusapu.svelte-vusapu{z-index:var(--layer-2);color:var(--body-text-color)}.eta-bar.svelte-vusapu.svelte-vusapu{position:absolute;top:0;right:0;bottom:0;left:0;transform-origin:left;opacity:.8;z-index:var(--layer-1);transition:10ms;background:var(--background-fill-secondary)}.progress-bar-wrap.svelte-vusapu.svelte-vusapu{border:1px solid var(--border-color-primary);background:var(--background-fill-primary);width:55.5%;height:var(--size-4)}.progress-bar.svelte-vusapu.svelte-vusapu{transform-origin:left;background-color:var(--loader-color);width:var(--size-full);height:var(--size-full)}.progress-level.svelte-vusapu.svelte-vusapu{display:flex;flex-direction:column;align-items:center;gap:1;z-index:var(--layer-2);width:var(--size-full)}.progress-level-inner.svelte-vusapu.svelte-vusapu{margin:var(--size-2) auto;color:var(--body-text-color);font-size:var(--text-sm);font-family:var(--font-mono)}.meta-text.svelte-vusapu.svelte-vusapu{position:absolute;bottom:0;right:0;z-index:var(--layer-2);padding:var(--size-1) 
var(--size-2);font-size:var(--text-sm);font-family:var(--font-mono)}.meta-text-center.svelte-vusapu.svelte-vusapu{display:flex;position:absolute;top:0;right:0;justify-content:center;align-items:center;transform:translateY(var(--size-6));z-index:var(--layer-2);padding:var(--size-1) var(--size-2);font-size:var(--text-sm);font-family:var(--font-mono);text-align:center}.error.svelte-vusapu.svelte-vusapu{box-shadow:var(--shadow-drop);border:solid 1px var(--error-border-color);border-radius:var(--radius-full);background:var(--error-background-fill);padding-right:var(--size-4);padding-left:var(--size-4);color:var(--error-text-color);font-weight:var(--weight-semibold);font-size:var(--text-lg);line-height:var(--line-lg);font-family:var(--font)}.validation-error.svelte-vusapu.svelte-vusapu{pointer-events:auto;color:var(--error-text-color);font-weight:var(--weight-semibold);font-size:var(--text-lg);line-height:var(--line-lg);font-family:var(--font);position:absolute;background:var(--error-background-fill);top:0;right:0;z-index:var(--layer-3);padding:var(--size-1) var(--size-2);font-size:var(--text-md);text-align:center;border-bottom-left-radius:var(--radius-sm);border-bottom:1px solid var(--error-border-color);border-left:1px solid var(--error-border-color);display:flex;justify-content:space-between;align-items:center;gap:var(--spacing-xl)}.minimal.svelte-vusapu.svelte-vusapu{pointer-events:none}.minimal.svelte-vusapu .progress-text.svelte-vusapu{background:var(--block-background-fill)}.border.svelte-vusapu.svelte-vusapu{border:1px solid var(--border-color-primary)}.clear-status.svelte-vusapu.svelte-vusapu{position:absolute;display:flex;top:var(--size-2);right:var(--size-2);justify-content:flex-end;gap:var(--spacing-sm);z-index:var(--layer-1)}.toast-body.svelte-syezpc{display:flex;position:relative;right:0;left:0;align-items:center;margin:var(--size-6) 
var(--size-4);margin:auto;border-radius:var(--container-radius);overflow:hidden;pointer-events:auto}.toast-body.error.svelte-syezpc{border:1px solid var(--color-red-700);background:var(--color-red-50)}.dark .toast-body.error.svelte-syezpc{border:1px solid var(--color-red-500);background-color:var(--color-grey-950)}.toast-body.warning.svelte-syezpc{border:1px solid var(--color-yellow-700);background:var(--color-yellow-50)}.dark .toast-body.warning.svelte-syezpc{border:1px solid var(--color-yellow-500);background-color:var(--color-grey-950)}.toast-body.info.svelte-syezpc{border:1px solid var(--color-grey-700);background:var(--color-grey-50)}.dark .toast-body.info.svelte-syezpc{border:1px solid var(--color-grey-500);background-color:var(--color-grey-950)}.toast-body.success.svelte-syezpc{border:1px solid var(--color-green-700);background:var(--color-green-50)}.dark .toast-body.success.svelte-syezpc{border:1px solid var(--color-green-500);background-color:var(--color-grey-950)}.toast-title.svelte-syezpc{display:flex;align-items:center;font-weight:var(--weight-bold);font-size:var(--text-lg);line-height:var(--line-sm)}.toast-title.error.svelte-syezpc{color:var(--color-red-700)}.dark .toast-title.error.svelte-syezpc{color:var(--color-red-50)}.toast-title.warning.svelte-syezpc{color:var(--color-yellow-700)}.dark .toast-title.warning.svelte-syezpc{color:var(--color-yellow-50)}.toast-title.info.svelte-syezpc{color:var(--color-grey-700)}.dark .toast-title.info.svelte-syezpc{color:var(--color-grey-50)}.toast-title.success.svelte-syezpc{color:var(--color-green-700)}.dark .toast-title.success.svelte-syezpc{color:var(--color-green-50)}.toast-close.svelte-syezpc{margin:0 var(--size-3);border-radius:var(--size-3);padding:0px var(--size-1-5);font-size:var(--size-5);line-height:var(--size-5)}.toast-close.error.svelte-syezpc{color:var(--color-red-700)}.dark 
.toast-close.error.svelte-syezpc{color:var(--color-red-500)}.toast-close.warning.svelte-syezpc{color:var(--color-yellow-700)}.dark .toast-close.warning.svelte-syezpc{color:var(--color-yellow-500)}.toast-close.info.svelte-syezpc{color:var(--color-grey-700)}.dark .toast-close.info.svelte-syezpc{color:var(--color-grey-500)}.toast-close.success.svelte-syezpc{color:var(--color-green-700)}.dark .toast-close.success.svelte-syezpc{color:var(--color-green-500)}.toast-text.svelte-syezpc{font-size:var(--text-lg);word-wrap:break-word;overflow-wrap:break-word;word-break:break-word}.toast-text.error.svelte-syezpc{color:var(--color-red-700)}.dark .toast-text.error.svelte-syezpc{color:var(--color-red-50)}.toast-text.warning.svelte-syezpc{color:var(--color-yellow-700)}.dark .toast-text.warning.svelte-syezpc{color:var(--color-yellow-50)}.toast-text.info.svelte-syezpc{color:var(--color-grey-700)}.dark .toast-text.info.svelte-syezpc{color:var(--color-grey-50)}.toast-text.success.svelte-syezpc{color:var(--color-green-700)}.dark .toast-text.success.svelte-syezpc{color:var(--color-green-50)}.toast-details.svelte-syezpc{margin:var(--size-3) var(--size-3) var(--size-3) 0;width:100%}.toast-icon.svelte-syezpc{display:flex;position:absolute;position:relative;flex-shrink:0;justify-content:center;align-items:center;margin:var(--size-2);border-radius:var(--radius-full);padding:var(--size-1);padding-left:calc(var(--size-1) - 1px);width:35px;height:35px}.toast-icon.error.svelte-syezpc{color:var(--color-red-700)}.dark .toast-icon.error.svelte-syezpc{color:var(--color-red-500)}.toast-icon.warning.svelte-syezpc{color:var(--color-yellow-700)}.dark .toast-icon.warning.svelte-syezpc{color:var(--color-yellow-500)}.toast-icon.info.svelte-syezpc{color:var(--color-grey-700)}.dark .toast-icon.info.svelte-syezpc{color:var(--color-grey-500)}.toast-icon.success.svelte-syezpc{color:var(--color-green-700)}.dark .toast-icon.success.svelte-syezpc{color:var(--color-green-500)}@keyframes 
svelte-syezpc-countdown{0%{transform:scaleX(1)}to{transform:scaleX(0)}}.timer.svelte-syezpc{position:absolute;bottom:0;left:0;transform-origin:0 0;animation:svelte-syezpc-countdown 10s linear forwards;width:100%;height:var(--size-1)}.timer.error.svelte-syezpc{background:var(--color-red-700)}.dark .timer.error.svelte-syezpc{background:var(--color-red-500)}.timer.warning.svelte-syezpc{background:var(--color-yellow-700)}.dark .timer.warning.svelte-syezpc{background:var(--color-yellow-500)}.timer.info.svelte-syezpc{background:var(--color-grey-700)}.dark .timer.info.svelte-syezpc{background:var(--color-grey-500)}.timer.success.svelte-syezpc{background:var(--color-green-700)}.dark .timer.success.svelte-syezpc{background:var(--color-green-500)}.hidden.svelte-syezpc{display:none}.toast-text.svelte-syezpc a{text-decoration:underline}.toast-wrap.svelte-je2isz{--toast-top:var(--size-4);display:flex;position:fixed;top:calc(var(--toast-top) + var(--size-4));right:var(--size-4);flex-direction:column;align-items:end;gap:var(--size-2);z-index:var(--layer-top);width:calc(100% - var(--size-8))}@media (--screen-sm){.toast-wrap.svelte-je2isz{width:calc(var(--size-96) + var(--size-10))}}.streaming-bar.svelte-ga0jj6{position:absolute;bottom:0;left:0;right:0;height:4px;background-color:var(--primary-600);animation:svelte-ga0jj6-countdown linear forwards;z-index:1}@keyframes svelte-ga0jj6-countdown{0%{transform:translate(0)}to{transform:translate(-100%)}}.bar.svelte-1f4spu{width:100%;margin:8px 0;border-radius:8px;display:flex;align-items:center;justify-content:center;font-size:12px;font-weight:600;transition:all .2s 
ease;cursor:default}.content.svelte-1f4spu{display:flex;align-items:center;justify-content:center;gap:6px;line-height:1.2}.type.svelte-1f4spu{font-size:12px;font-weight:700;color:#000}.func-name.svelte-1f4spu{color:#000;font-size:10px;margin:0}.tokens.svelte-1f4spu{font-size:10px;opacity:.9;font-weight:500;color:#000}.context-container.svelte-1mw96hn{display:flex;width:100%;flex-direction:column;background:var(--background-fill-secondary);border-radius:var(--block-radius);box-shadow:var(--shadow-drop);height:100%}.empty-state.svelte-1mw96hn{height:100%;width:100%;display:flex;align-items:center;justify-content:center;color:var(--neutral-500);font-size:14px;background:var(--background-fill-primary);border:1px solid var(--border-color-primary);border-radius:12px;padding:40px}.header.svelte-1mw96hn{display:flex;align-items:center;gap:10px;padding:16px 16px 12px;border-bottom:2px solid var(--border-color-primary);flex-wrap:wrap;flex-shrink:0}.title.svelte-1mw96hn{margin:0;color:var(--body-text-color);font-size:14px;font-weight:600}.count-badge.svelte-1mw96hn{background:var(--color-accent);color:#fff;padding:4px 10px;border-radius:12px;font-size:10px;font-weight:600}.token-count.svelte-1mw96hn{margin:0;color:var(--body-text-color);font-size:12px;font-weight:400}.bars-container.svelte-1mw96hn{flex:1;overflow-y:auto;padding:0 16px 16px;min-height:0}
src/backend/gradio_aicontext/templates/example/index.js ADDED
@@ -0,0 +1,127 @@
+ const {
+   SvelteComponent: g,
+   add_iframe_resize_listener: v,
+   add_render_callback: y,
+   append_hydration: b,
+   attr: m,
+   binding_callbacks: w,
+   children: z,
+   claim_element: S,
+   claim_text: k,
+   detach: d,
+   element: p,
+   init: E,
+   insert_hydration: q,
+   noop: u,
+   safe_not_equal: C,
+   set_data: D,
+   text: I,
+   toggle_class: r
+ } = window.__gradio__svelte__internal, { onMount: M } = window.__gradio__svelte__internal;
+ function P(t) {
+   let e, l = o(
+     /*value*/
+     t[0]
+   ) + "", a, _;
+   return {
+     c() {
+       e = p("div"), a = I(l), this.h();
+     },
+     l(i) {
+       e = S(i, "DIV", { class: !0 });
+       var n = z(e);
+       a = k(n, l), n.forEach(d), this.h();
+     },
+     h() {
+       m(e, "class", "svelte-84cxb8"), y(() => (
+         /*div_elementresize_handler*/
+         t[5].call(e)
+       )), r(
+         e,
+         "table",
+         /*type*/
+         t[1] === "table"
+       ), r(
+         e,
+         "gallery",
+         /*type*/
+         t[1] === "gallery"
+       ), r(
+         e,
+         "selected",
+         /*selected*/
+         t[2]
+       );
+     },
+     m(i, n) {
+       q(i, e, n), b(e, a), _ = v(
+         e,
+         /*div_elementresize_handler*/
+         t[5].bind(e)
+       ), t[6](e);
+     },
+     p(i, [n]) {
+       n & /*value*/
+       1 && l !== (l = o(
+         /*value*/
+         i[0]
+       ) + "") && D(a, l), n & /*type*/
+       2 && r(
+         e,
+         "table",
+         /*type*/
+         i[1] === "table"
+       ), n & /*type*/
+       2 && r(
+         e,
+         "gallery",
+         /*type*/
+         i[1] === "gallery"
+       ), n & /*selected*/
+       4 && r(
+         e,
+         "selected",
+         /*selected*/
+         i[2]
+       );
+     },
+     i: u,
+     o: u,
+     d(i) {
+       i && d(e), _(), t[6](null);
+     }
+   };
+ }
+ function V(t, e) {
+   t.style.setProperty("--local-text-width", `${e && e < 150 ? e : 200}px`), t.style.whiteSpace = "unset";
+ }
+ function o(t, e = 60) {
+   if (!t) return "";
+   const l = String(t);
+   return l.length <= e ? l : l.slice(0, e) + "...";
+ }
+ function W(t, e, l) {
+   let { value: a } = e, { type: _ } = e, { selected: i = !1 } = e, n, c;
+   M(() => {
+     V(c, n);
+   });
+   function f() {
+     n = this.clientWidth, l(3, n);
+   }
+   function h(s) {
+     w[s ? "unshift" : "push"](() => {
+       c = s, l(4, c);
+     });
+   }
+   return t.$$set = (s) => {
+     "value" in s && l(0, a = s.value), "type" in s && l(1, _ = s.type), "selected" in s && l(2, i = s.selected);
+   }, [a, _, i, n, c, f, h];
+ }
+ class j extends g {
+   constructor(e) {
+     super(), E(this, e, W, P, C, { value: 0, type: 1, selected: 2 });
+   }
+ }
+ export {
+   j as default
+ };
src/backend/gradio_aicontext/templates/example/style.css ADDED
@@ -0,0 +1 @@
+ .gallery.svelte-84cxb8{padding:var(--size-1) var(--size-2)}div.svelte-84cxb8{overflow:hidden;min-width:var(--local-text-width);white-space:nowrap}
src/demo/__init__.py ADDED
File without changes
src/demo/app.py ADDED
@@ -0,0 +1,106 @@
+ import gradio as gr
+ from gradio_aicontext import AIContext
+ import json
+ import tiktoken
+
+
+ def count_tokens(message: dict) -> int:
+     encoding = tiktoken.encoding_for_model("gpt-4")
+     return len(encoding.encode(json.dumps([message])))
+
+
+ initial_messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful AI assistant skilled in Python programming and data analysis."
+     },
+     {
+         "role": "user",
+         "content": "Can you help me write a function to calculate the factorial of a number?"
+     },
+     {
+         "role": "assistant",
+         "content": "I'll help you create a factorial function. Here are two approaches:\n\n```python\n# Recursive approach\ndef factorial_recursive(n):\n    if n <= 1:\n        return 1\n    return n * factorial_recursive(n - 1)\n\n# Iterative approach\ndef factorial_iterative(n):\n    result = 1\n    for i in range(1, n + 1):\n        result *= i\n    return result\n\n# Example usage\nprint(factorial_recursive(5))  # 120\nprint(factorial_iterative(5))  # 120\n```\n\nBoth functions work well, but the iterative version is more memory efficient for large numbers."
+     },
+     {
+         "role": "user",
+         "content": "What about using math.factorial()?"
+     },
+     {
+         "role": "assistant",
+         "content": "Great point! Python's built-in `math.factorial()` is the most efficient option:\n\n```python\nimport math\n\nresult = math.factorial(5)  # 120\n```\n\nIt's optimized, handles edge cases, and is the recommended approach for production code. Use custom implementations mainly for learning or when you need specific behavior."
+     },
+     {
+         "role": "user",
+         "content": "What's the weather like in New York today?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather",
+         "call_id": "call_weather_123",
+         "arguments": "{\"location\": \"New York, NY\", \"units\": \"fahrenheit\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_weather_123",
+         "output": "{\"location\": \"New York, NY\", \"temperature\": 72, \"condition\": \"Partly Cloudy\", \"humidity\": 65, \"wind_speed\": 8, \"visibility\": 10, \"uv_index\": 6, \"feels_like\": 75}"
+     },
+     {
+         "role": "assistant",
+         "content": "The weather in New York today is quite pleasant! Here are the details:\n\n🌤️ **Partly Cloudy** - 72°F (feels like 75°F)\n💨 Wind: 8 mph\n💧 Humidity: 65%\n👁️ Visibility: 10 miles\n☀️ UV Index: 6 (moderate)\n\nIt's a nice day to be outside!"
+     },
+     {
+         "role": "user",
+         "content": "Can you also check the forecast for tomorrow and send me an email reminder?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather_forecast",
+         "call_id": "call_forecast_456",
+         "arguments": "{\"location\": \"New York, NY\", \"days\": 1}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_forecast_456",
+         "output": "{\"location\": \"New York, NY\", \"forecast\": [{\"date\": \"2024-10-17\", \"high\": 68, \"low\": 58, \"condition\": \"Light Rain\", \"precipitation_chance\": 80, \"wind_speed\": 12}]}"
+     },
+     {
+         "type": "function_call",
+         "name": "send_email",
+         "call_id": "call_email_789",
+         "arguments": "{\"to\": \"user@example.com\", \"subject\": \"Weather Reminder: Bring an Umbrella Tomorrow!\", \"body\": \"Tomorrow's forecast for New York: Light rain expected with 80% chance of precipitation. High: 68°F, Low: 58°F. Don't forget your umbrella!\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_email_789",
+         "output": "{\"status\": \"sent\", \"message_id\": \"msg_12345\", \"delivered_at\": \"2024-10-16T20:45:30Z\"}"
+     }
+ ]
+
+ with gr.Blocks(title="AI Context Visualization Demo") as demo:
+     gr.Markdown("# AI Context Component Demo")
+     gr.Markdown("Shows token visualization for OpenAI-style conversation messages")
+
+     with gr.Row():
+         with gr.Column(scale=2):
+             json_editor = gr.JSON(
+                 value=initial_messages,
+                 label="Raw Messages (Editable)",
+                 show_label=True
+             )
+
+         with gr.Column(scale=1):
+             context_viz = AIContext(
+                 value=initial_messages,
+                 count_tokens_fn=count_tokens
+             )
+
+     json_editor.change(
+         fn=lambda x: x,
+         inputs=[json_editor],
+         outputs=[context_viz]
+     )
+
+
+ if __name__ == "__main__":
+     demo.launch()
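The demo's `count_tokens` requires `tiktoken`. Per the component's own docs, passing `count_tokens_fn=None` instead falls back to a character-count / 4 heuristic; a dependency-free counter in the same spirit (the helper name `count_tokens_approx` is ours, not part of the package) could be sketched as:

```python
import json


def count_tokens_approx(message: dict) -> int:
    # Approximate tokens as serialized length // 4, mirroring the component's
    # documented fallback heuristic when no count_tokens_fn is supplied.
    return max(1, len(json.dumps([message])) // 4)


# e.g. a short user message serializes to 35 characters -> ~8 tokens
print(count_tokens_approx({"role": "user", "content": "hi"}))
```

Such a heuristic avoids the `tiktoken` dependency but will drift from real token counts, especially for non-English text or long tool outputs.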
src/demo/css.css ADDED
@@ -0,0 +1,157 @@
+ html {
+   font-family: Inter;
+   font-size: 16px;
+   font-weight: 400;
+   line-height: 1.5;
+   -webkit-text-size-adjust: 100%;
+   background: #fff;
+   color: #323232;
+   -webkit-font-smoothing: antialiased;
+   -moz-osx-font-smoothing: grayscale;
+   text-rendering: optimizeLegibility;
+ }
+
+ :root {
+   --space: 1;
+   --vspace: calc(var(--space) * 1rem);
+   --vspace-0: calc(3 * var(--space) * 1rem);
+   --vspace-1: calc(2 * var(--space) * 1rem);
+   --vspace-2: calc(1.5 * var(--space) * 1rem);
+   --vspace-3: calc(0.5 * var(--space) * 1rem);
+ }
+
+ .app {
+   max-width: 748px !important;
+ }
+
+ .prose p {
+   margin: var(--vspace) 0;
+   line-height: calc(2 * var(--vspace));
+   font-size: 1rem;
+ }
+
+ code {
+   font-family: "Inconsolata", sans-serif;
+   font-size: 16px;
+ }
+
+ h1,
+ h1 code {
+   font-weight: 400;
+   line-height: calc(2.5 / var(--space) * var(--vspace));
+ }
+
+ h1 code {
+   background: none;
+   border: none;
+   letter-spacing: 0.05em;
+   padding-bottom: 5px;
+   position: relative;
+   padding: 0;
+ }
+
+ h2 {
+   margin: var(--vspace-1) 0 var(--vspace-2) 0;
+   line-height: 1em;
+ }
+
+ h3,
+ h3 code {
+   margin: var(--vspace-1) 0 var(--vspace-2) 0;
+   line-height: 1em;
+ }
+
+ h4,
+ h5,
+ h6 {
+   margin: var(--vspace-3) 0 var(--vspace-3) 0;
+   line-height: var(--vspace);
+ }
+
+ .bigtitle,
+ h1,
+ h1 code {
+   font-size: calc(8px * 4.5);
+   word-break: break-word;
+ }
+
+ .title,
+ h2,
+ h2 code {
+   font-size: calc(8px * 3.375);
+   font-weight: lighter;
+   word-break: break-word;
+   border: none;
+   background: none;
+ }
+
+ .subheading1,
+ h3,
+ h3 code {
+   font-size: calc(8px * 1.8);
+   font-weight: 600;
+   border: none;
+   background: none;
+   letter-spacing: 0.1em;
+   text-transform: uppercase;
+ }
+
+ h2 code {
+   padding: 0;
+   position: relative;
+   letter-spacing: 0.05em;
+ }
+
+ blockquote {
+   font-size: calc(8px * 1.1667);
+   font-style: italic;
+   line-height: calc(1.1667 * var(--vspace));
+   margin: var(--vspace-2) var(--vspace-2);
+ }
+
+ .subheading2,
+ h4 {
+   font-size: calc(8px * 1.4292);
+   text-transform: uppercase;
+   font-weight: 600;
+ }
+
+ .subheading3,
+ h5 {
+   font-size: calc(8px * 1.2917);
+   line-height: calc(1.2917 * var(--vspace));
+   font-weight: lighter;
+   text-transform: uppercase;
+   letter-spacing: 0.15em;
+ }
+
+ h6 {
+   font-size: calc(8px * 1.1667);
+   font-size: 1.1667em;
+   font-weight: normal;
+   font-style: italic;
+   font-family: "le-monde-livre-classic-byol", serif !important;
+   letter-spacing: 0px !important;
+ }
+
+ #start .md > *:first-child {
+   margin-top: 0;
+ }
+
+ h2 + h3 {
+   margin-top: 0;
+ }
+
+ .md hr {
+   border: none;
+   border-top: 1px solid var(--block-border-color);
+   margin: var(--vspace-2) 0 var(--vspace-2) 0;
+ }
+
+ .prose ul {
+   margin: var(--vspace-2) 0 var(--vspace-1) 0;
+ }
+
+ .gap {
+   gap: 0;
+ }
src/demo/space.py ADDED
@@ -0,0 +1,228 @@
+
+ import gradio as gr
+ from app import demo as app
+ import os
+
+ _docs = {'AIContext': {'description': 'Creates an AI context visualization component showing message stack with token counts.', 'members': {'__init__': {'value': {'type': 'list | dict | Callable | None', 'default': 'None', 'description': 'list of messages or dict containing messages. If a function is provided, the function will be called each time the app loads to set the initial value of this component.'}, 'count_tokens_fn': {'type': 'Callable[[Any], int] | None', 'default': 'None', 'description': 'function to count tokens in a message. If None, uses character count / 4 heuristic.'}, 'label': {'type': 'str | I18nData | None', 'default': '"AI Context"', 'description': 'the label for this component, displayed above the component if `show_label` is `True`.'}, 'every': {'type': 'Timer | float | None', 'default': 'None', 'description': 'Continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise).'}, 'inputs': {'type': 'Component | Sequence[Component] | set[Component] | None', 'default': 'None', 'description': 'Components that are used as inputs to calculate `value` if `value` is a function (has no effect otherwise).'}, 'scale': {'type': 'int | None', 'default': 'None', 'description': 'relative size compared to adjacent Components.'}, 'min_width': {'type': 'int', 'default': '160', 'description': 'minimum pixel width.'}, 'interactive': {'type': 'bool | None', 'default': 'False', 'description': 'if True, will be rendered as interactive; if False, will be read-only.'}, 'visible': {'type': 'bool | Literal["hidden"]', 'default': 'True', 'description': 'If False, component will be hidden.'}, 'elem_id': {'type': 'str | None', 'default': 'None', 'description': 'An optional string that is assigned as the id of this component in the HTML DOM.'}, 'elem_classes': {'type': 'list[str] | str | None', 'default': 'None', 'description': 'An optional list of strings that are assigned as the classes of this component in the HTML DOM.'}, 'render': {'type': 'bool', 'default': 'True', 'description': 'If False, component will not be rendered in the Blocks context.'}, 'key': {'type': 'int | str | tuple[int | str, ...] | None', 'default': 'None', 'description': 'in a gr.render, Components with the same key across re-renders are treated as the same component.'}}, 'postprocess': {'value': {'type': 'typing.Any', 'description': 'messages list or dict containing messages.'}}, 'preprocess': {'return': {'type': 'typing.Any', 'description': 'Passes the data through unchanged.'}, 'value': None}}, 'events': {'change': {'type': None, 'default': None, 'description': 'Triggered when the value of the AIContext changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input.'}}}, '__meta__': {'additional_interfaces': {}, 'user_fn_refs': {'AIContext': []}}}
+
+ abs_path = os.path.join(os.path.dirname(__file__), "css.css")
+
+ with gr.Blocks(
+     css=abs_path,
+     theme=gr.themes.Default(
+         font_mono=[
+             gr.themes.GoogleFont("Inconsolata"),
+             "monospace",
+         ],
+     ),
+ ) as demo:
+     gr.Markdown(
+ """
+ # `gradio_aicontext`
+
+ <div style="display: flex; gap: 7px;">
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange">
+ </div>
+
+ Custom gradio component to visualize your ai context by colours and tokens' size.
+ """, elem_classes=["md-custom"], header_links=True)
+     app.render()
+     gr.Markdown(
+ """
+ ## Installation
+
+ ```bash
+ pip install gradio_aicontext
+ ```
+
+ ## Usage
+
+ ```python
+ import gradio as gr
+ from gradio_aicontext import AIContext
+ import json
+ import tiktoken
+
+
+ def count_tokens(message: dict) -> int:
+     encoding = tiktoken.encoding_for_model("gpt-4")
+     return len(encoding.encode(json.dumps([message])))
+
+
+ initial_messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful AI assistant skilled in Python programming and data analysis."
+     },
+     {
+         "role": "user",
+         "content": "Can you help me write a function to calculate the factorial of a number?"
+     },
+     {
+         "role": "assistant",
+         "content": "I'll help you create a factorial function. Here are two approaches:\n\n```python\n# Recursive approach\ndef factorial_recursive(n):\n    if n <= 1:\n        return 1\n    return n * factorial_recursive(n - 1)\n\n# Iterative approach\ndef factorial_iterative(n):\n    result = 1\n    for i in range(1, n + 1):\n        result *= i\n    return result\n\n# Example usage\nprint(factorial_recursive(5))  # 120\nprint(factorial_iterative(5))  # 120\n```\n\nBoth functions work well, but the iterative version is more memory efficient for large numbers."
+     },
+     {
+         "role": "user",
+         "content": "What about using math.factorial()?"
+     },
+     {
+         "role": "assistant",
+         "content": "Great point! Python's built-in `math.factorial()` is the most efficient option:\n\n```python\nimport math\n\nresult = math.factorial(5)  # 120\n```\n\nIt's optimized, handles edge cases, and is the recommended approach for production code. Use custom implementations mainly for learning or when you need specific behavior."
+     },
+     {
+         "role": "user",
+         "content": "What's the weather like in New York today?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather",
+         "call_id": "call_weather_123",
+         "arguments": "{\"location\": \"New York, NY\", \"units\": \"fahrenheit\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_weather_123",
+         "output": "{\"location\": \"New York, NY\", \"temperature\": 72, \"condition\": \"Partly Cloudy\", \"humidity\": 65, \"wind_speed\": 8, \"visibility\": 10, \"uv_index\": 6, \"feels_like\": 75}"
+     },
+     {
+         "role": "assistant",
+         "content": "The weather in New York today is quite pleasant! Here are the details:\n\n🌤️ **Partly Cloudy** - 72°F (feels like 75°F)\n💨 Wind: 8 mph\n💧 Humidity: 65%\n👁️ Visibility: 10 miles\n☀️ UV Index: 6 (moderate)\n\nIt's a nice day to be outside!"
+     },
+     {
+         "role": "user",
+         "content": "Can you also check the forecast for tomorrow and send me an email reminder?"
+     },
+     {
+         "type": "function_call",
+         "name": "get_weather_forecast",
+         "call_id": "call_forecast_456",
+         "arguments": "{\"location\": \"New York, NY\", \"days\": 1}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_forecast_456",
+         "output": "{\"location\": \"New York, NY\", \"forecast\": [{\"date\": \"2024-10-17\", \"high\": 68, \"low\": 58, \"condition\": \"Light Rain\", \"precipitation_chance\": 80, \"wind_speed\": 12}]}"
+     },
+     {
+         "type": "function_call",
+         "name": "send_email",
+         "call_id": "call_email_789",
+         "arguments": "{\"to\": \"user@example.com\", \"subject\": \"Weather Reminder: Bring an Umbrella Tomorrow!\", \"body\": \"Tomorrow's forecast for New York: Light rain expected with 80% chance of precipitation. High: 68°F, Low: 58°F. Don't forget your umbrella!\"}"
+     },
+     {
+         "type": "function_call_output",
+         "call_id": "call_email_789",
+         "output": "{\"status\": \"sent\", \"message_id\": \"msg_12345\", \"delivered_at\": \"2024-10-16T20:45:30Z\"}"
+     }
+ ]
+
+ with gr.Blocks(title="AI Context Visualization Demo") as demo:
+     gr.Markdown("# AI Context Component Demo")
+     gr.Markdown("Shows token visualization for OpenAI-style conversation messages")
+
+     with gr.Row():
+         with gr.Column(scale=2):
+             json_editor = gr.JSON(
+                 value=initial_messages,
+                 label="Raw Messages (Editable)",
+                 show_label=True
+             )
+
+         with gr.Column(scale=1):
+             context_viz = AIContext(
+                 value=initial_messages,
+                 count_tokens_fn=count_tokens
+             )
+
+     json_editor.change(
+         fn=lambda x: x,
+         inputs=[json_editor],
+         outputs=[context_viz]
+     )
+
+
+ if __name__ == "__main__":
+     demo.launch()
+
+ ```
+ """, elem_classes=["md-custom"], header_links=True)
+
+
+     gr.Markdown("""
+ ## `AIContext`
+
+ ### Initialization
+ """, elem_classes=["md-custom"], header_links=True)
+
+     gr.ParamViewer(value=_docs["AIContext"]["members"]["__init__"], linkify=[])
+
+
+     gr.Markdown("### Events")
+     gr.ParamViewer(value=_docs["AIContext"]["events"], linkify=['Event'])
+
+
+     gr.Markdown("""
+
+ ### User function
+
+ The impact on the user's predict function varies depending on whether the component is used as an input or output for an event (or both).
+
+ - When used as an input, the component only impacts the input signature of the user function.
+ - When used as an output, the component only impacts the return signature of the user function.
+
+ The code snippet below is accurate in cases where the component is used as both an input and an output.
+
+ - **As input:** Is passed the data through unchanged.
+ - **As output:** Should return a messages list or dict containing messages.
+
+ ```python
+ def predict(
+     value: typing.Any
+ ) -> typing.Any:
+     return value
+ ```
+ """, elem_classes=["md-custom", "AIContext-user-fn"], header_links=True)
+
+
+     demo.load(None, js=r"""function() {
+     const refs = {};
+     const user_fn_refs = {
+         AIContext: [], };
+     requestAnimationFrame(() => {
+
+         Object.entries(user_fn_refs).forEach(([key, refs]) => {
+             if (refs.length > 0) {
+                 const el = document.querySelector(`.${key}-user-fn`);
+                 if (!el) return;
+                 refs.forEach(ref => {
+                     el.innerHTML = el.innerHTML.replace(
+                         new RegExp("\\b"+ref+"\\b", "g"),
+                         `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
+                     );
+                 })
+             }
+         })
+
+         Object.entries(refs).forEach(([key, refs]) => {
+             if (refs.length > 0) {
+                 const el = document.querySelector(`.${key}`);
+                 if (!el) return;
+                 refs.forEach(ref => {
+                     el.innerHTML = el.innerHTML.replace(
+                         new RegExp("\\b"+ref+"\\b", "g"),
+                         `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
+                     );
+                 })
+             }
+         })
+     })
+ }
+ """)
+
+ demo.launch()
src/frontend/Bar.svelte ADDED
@@ -0,0 +1,88 @@
+ <script lang="ts">
+   export let role: string;
+   export let type: string = "";
+   export let name: string = "";
+   export let tokens: number;
+   export let height: number;
+
+   const colors: Record<string, string> = {
+     "system": "#BEF1FE",
+     "user": "#CFFFD9",
+     "assistant": "#FEFEB1",
+     "function_call": "#E9D8FF",
+     "function_result": "#E9D8FF"
+   };
+
+   $: color = type === "function_call" ? colors["function_call"] :
+     type === "function_call_output" ? colors["function_result"] :
+     role === "system" ? colors["system"] :
+     role === "user" ? colors["user"] :
+     colors["assistant"];
+
+   $: displayType = type === "function_call" ? name || "unknown" :
+     type === "function_call_output" ? "result" :
+     role;
+
+   let hovered = false;
+ </script>
+
+ <div
+   class="bar"
+   style:background={color}
+   style:height="{height}px"
+   style:border="1px solid {color}44"
+   style:transform={hovered ? "scale(1.02)" : "scale(1)"}
+   on:mouseenter={() => hovered = true}
+   on:mouseleave={() => hovered = false}
+ >
+   <div class="content">
+     {#if type === "function_call"}
+       <pre class="func-name">{displayType}</pre>
+     {:else}
+       <div class="type">{displayType}</div>
+     {/if}
+     <div class="tokens">/ {tokens} tokens</div>
+   </div>
+ </div>
+
+ <style>
+   .bar {
+     width: 100%;
+     margin: 8px 0;
+     border-radius: 8px;
+     display: flex;
+     align-items: center;
+     justify-content: center;
+     font-size: 12px;
+     font-weight: 600;
+     transition: all 0.2s ease;
+     cursor: default;
+   }
+
+   .content {
+     display: flex;
+     align-items: center;
+     justify-content: center;
+     gap: 6px;
+     line-height: 1.2;
+   }
+
+   .type {
+     font-size: 12px;
+     font-weight: 700;
+     color: black;
+   }
+
+   .func-name {
+     color: black;
+     font-size: 10px;
+     margin: 0;
+   }
+
+   .tokens {
+     font-size: 10px;
+     opacity: 0.9;
+     font-weight: 500;
+     color: black;
+   }
+ </style>
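For readers following the diff, the colour selection in `Bar.svelte` reduces to a small lookup with two overrides. A minimal Python sketch of the same logic (`COLORS` and `bar_color` are illustrative names, not part of the package):

```python
# Hypothetical Python mirror of Bar.svelte's colour selection.
COLORS = {
    "system": "#BEF1FE",
    "user": "#CFFFD9",
    "assistant": "#FEFEB1",
    "function_call": "#E9D8FF",
    "function_result": "#E9D8FF",
}


def bar_color(role: str, msg_type: str = "") -> str:
    # Function-call and function-result messages override the role-based
    # colour; any role other than system/user falls back to assistant.
    if msg_type == "function_call":
        return COLORS["function_call"]
    if msg_type == "function_call_output":
        return COLORS["function_result"]
    if role == "system":
        return COLORS["system"]
    if role == "user":
        return COLORS["user"]
    return COLORS["assistant"]
```

Note that the fallback branch means unrecognized roles render in the assistant colour, matching the ternary chain in the component.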
src/frontend/Example.svelte ADDED
@@ -0,0 +1,52 @@
+ <script lang="ts">
+   import { onMount } from "svelte";
+
+   export let value: string | null;
+   export let type: "gallery" | "table";
+   export let selected = false;
+
+   let size: number;
+   let el: HTMLDivElement;
+
+   function set_styles(element: HTMLElement, el_width: number): void {
+     element.style.setProperty(
+       "--local-text-width",
+       `${el_width && el_width < 150 ? el_width : 200}px`
+     );
+     element.style.whiteSpace = "unset";
+   }
+
+   function truncate_text(text: string | null, max_length = 60): string {
+     if (!text) return "";
+     const str = String(text);
+     if (str.length <= max_length) return str;
+     return str.slice(0, max_length) + "...";
+   }
+
+   onMount(() => {
+     set_styles(el, size);
+   });
+ </script>
+
+ <div
+   bind:clientWidth={size}
+   bind:this={el}
+   class:table={type === "table"}
+   class:gallery={type === "gallery"}
+   class:selected
+ >
+   {truncate_text(value)}
+ </div>
+
+ <style>
+   .gallery {
+     padding: var(--size-1) var(--size-2);
+   }
+
+   div {
+     overflow: hidden;
+     min-width: var(--local-text-width);
+
+     white-space: nowrap;
+   }
+ </style>
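The truncation rule in `Example.svelte` is self-contained. A hedged Python equivalent (the `truncate_text` name mirrors the Svelte helper, but this function is illustrative only):

```python
def truncate_text(text, max_length=60):
    # None or empty input yields an empty string; anything longer than
    # max_length is cut and suffixed with a literal "..." (as in the
    # Svelte helper, the suffix is added on top of the max_length slice).
    if not text:
        return ""
    s = str(text)
    return s if len(s) <= max_length else s[:max_length] + "..."
```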
src/frontend/Index.svelte ADDED
@@ -0,0 +1,146 @@
+ <svelte:options accessors={true} />
+
+ <script lang="ts">
+   import { Block } from "@gradio/atoms";
+   import type { LoadingStatus } from "@gradio/statustracker";
+   import { StatusTracker } from "@gradio/statustracker";
+   import type { Gradio } from "@gradio/utils";
+   import Bar from "./Bar.svelte";
+
+   export let gradio: Gradio<{
+     change: never;
+     submit: never;
+     input: never;
+     clear_status: LoadingStatus;
+   }>;
+   export let elem_id = "";
+   export let elem_classes: string[] = [];
+   export let visible: boolean | "hidden" = true;
+   export let value: any = null;
+   export let scale: number | null = null;
+   export let min_width: number | undefined = undefined;
+   export let loading_status: LoadingStatus | undefined = undefined;
+   export let interactive: boolean = false;
+
+   $: messages = value?.messages || [];
+   $: tokens = value?.tokens_count || [];
+   $: totalTokens = tokens.reduce((sum: number, t: number) => sum + t, 0);
+   $: heights = tokens.map((count: number) =>
+     Math.min(Math.max(20, count * 0.2), 400)
+   );
+
+   function handle_change(): void {
+     gradio.dispatch("change");
+   }
+
+   $: value, handle_change();
+ </script>
+
+ <Block
+   {visible}
+   {elem_id}
+   {elem_classes}
+   {scale}
+   {min_width}
+   allow_overflow={false}
+   padding={false}
+ >
+   {#if loading_status}
+     <StatusTracker
+       autoscroll={gradio.autoscroll}
+       i18n={gradio.i18n}
+       {...loading_status}
+       on:clear_status={() => gradio.dispatch("clear_status", loading_status)}
+     />
+   {/if}
+
+   <div class="context-container">
+     {#if messages.length === 0}
+       <div class="empty-state">No messages yet, start chatting!</div>
+     {:else}
+       <div class="header">
+         <h4 class="title">📚 Context Stack</h4>
+         <span class="count-badge">{messages.length}</span>
+         <span class="token-count">{totalTokens} tokens</span>
+       </div>
+       <div class="bars-container">
+         {#each messages as message, i}
+           <Bar
+             role={message.role || "unknown"}
+             type={message.type || ""}
+             name={message.name || ""}
+             tokens={tokens[i]}
+             height={heights[i]}
+           />
+         {/each}
+       </div>
+     {/if}
+   </div>
+ </Block>
+
+ <style>
+   .context-container {
+     display: flex;
+     width: 100%;
+     height: 100%;
+     flex-direction: column;
+     background: var(--background-fill-secondary);
+     border-radius: var(--block-radius);
+     box-shadow: var(--shadow-drop);
+   }
+
+   .empty-state {
+     height: 100%;
+     width: 100%;
+     display: flex;
+     align-items: center;
+     justify-content: center;
+     color: var(--neutral-500);
+     font-size: 14px;
+     background: var(--background-fill-primary);
+     border: 1px solid var(--border-color-primary);
+     border-radius: 12px;
+     padding: 40px;
+   }
+
+   .header {
+     display: flex;
+     align-items: center;
+     gap: 10px;
+     padding: 16px 16px 12px 16px;
+     border-bottom: 2px solid var(--border-color-primary);
+     flex-wrap: wrap;
+     flex-shrink: 0;
+   }
+
+   .title {
+     margin: 0;
+     color: var(--body-text-color);
+     font-size: 14px;
+     font-weight: 600;
+   }
+
+   .count-badge {
+     background: var(--color-accent);
+     color: white;
+     padding: 4px 10px;
+     border-radius: 12px;
+     font-size: 10px;
+     font-weight: 600;
+   }
+
+   .token-count {
+     margin: 0;
+     color: var(--body-text-color);
+     font-size: 12px;
+     font-weight: 400;
+   }
+
+   .bars-container {
+     flex: 1;
+     overflow-y: auto;
+     padding: 0 16px 16px 16px;
+     min-height: 0;
+   }
+ </style>
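The bar sizing in `Index.svelte` maps token counts linearly to pixels with a clamp: 0.2 px per token, bounded to the 20–400 px range, alongside a plain sum for the header's total. A minimal Python sketch of that mapping (`bar_height` and `total_tokens` are illustrative names, not exported by the package):

```python
def bar_height(tokens: int) -> float:
    # Same clamp as Index.svelte: Math.min(Math.max(20, count * 0.2), 400),
    # i.e. a 20 px floor so tiny messages stay visible and a 400 px ceiling
    # so one huge message cannot dominate the stack.
    return min(max(20, tokens * 0.2), 400)


def total_tokens(tokens_count):
    # Mirrors the totalTokens reduce() shown in the header badge.
    return sum(tokens_count)
```

With this scaling, every message between 0 and 100 tokens renders at the same 20 px minimum, and anything past 2000 tokens saturates at 400 px.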
src/frontend/gradio.config.js ADDED
@@ -0,0 +1,9 @@
+ export default {
+   plugins: [],
+   svelte: {
+     preprocess: [],
+   },
+   build: {
+     target: "modules",
+   },
+ };
src/frontend/package-lock.json ADDED
The diff for this file is too large to render. See raw diff
 
src/frontend/package.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "name": "gradio_aicontext",
+   "version": "0.3.30",
+   "description": "Gradio UI packages",
+   "type": "module",
+   "author": "",
+   "license": "ISC",
+   "private": false,
+   "main_changeset": true,
+   "exports": {
+     ".": {
+       "gradio": "./Index.svelte",
+       "svelte": "./dist/Index.svelte",
+       "types": "./dist/Index.svelte.d.ts"
+     },
+     "./example": {
+       "gradio": "./Example.svelte",
+       "svelte": "./dist/Example.svelte",
+       "types": "./dist/Example.svelte.d.ts"
+     },
+     "./package.json": "./package.json"
+   },
+   "dependencies": {
+     "@gradio/atoms": "0.18.1",
+     "@gradio/icons": "0.14.0",
+     "@gradio/statustracker": "0.11.1",
+     "@gradio/utils": "0.10.2"
+   },
+   "devDependencies": {
+     "@gradio/preview": "0.14.0"
+   },
+   "peerDependencies": {
+     "svelte": "^4.0.0"
+   },
+   "repository": {
+     "type": "git",
+     "url": "git+https://github.com/gradio-app/gradio.git",
+     "directory": "js/simpletextbox"
+   }
+ }
src/frontend/tsconfig.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "compilerOptions": {
+     "allowJs": true,
+     "checkJs": true,
+     "esModuleInterop": true,
+     "forceConsistentCasingInFileNames": true,
+     "resolveJsonModule": true,
+     "skipLibCheck": true,
+     "sourceMap": true,
+     "strict": true,
+     "verbatimModuleSyntax": true
+   },
+   "exclude": ["node_modules", "dist", "./gradio.config.js"]
+ }
src/package-lock.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "name": "aicontext",
+   "lockfileVersion": 3,
+   "requires": true,
+   "packages": {}
+ }
src/pyproject.toml ADDED
@@ -0,0 +1,51 @@
+ [build-system]
+ requires = [
+   "hatchling",
+   "hatch-requirements-txt",
+   "hatch-fancy-pypi-readme>=22.5.0",
+ ]
+ build-backend = "hatchling.build"
+
+ [project]
+ name = "gradio_aicontext"
+ version = "0.0.1"
+ description = "Custom gradio component to visualize your ai context by colours and tokens' size."
+ readme = "README.md"
+ license = "Apache-2.0"
+ requires-python = ">=3.8"
+ authors = [{ name = "YOUR NAME", email = "YOUREMAIL@domain.com" }]
+ keywords = ["gradio-custom-component", "gradio-template-SimpleTextbox"]
+ # Add dependencies here
+ dependencies = ["gradio>=4.0,<6.0"]
+ classifiers = [
+   'Development Status :: 3 - Alpha',
+   'Operating System :: OS Independent',
+   'Programming Language :: Python :: 3',
+   'Programming Language :: Python :: 3 :: Only',
+   'Programming Language :: Python :: 3.8',
+   'Programming Language :: Python :: 3.9',
+   'Programming Language :: Python :: 3.10',
+   'Programming Language :: Python :: 3.11',
+   'Topic :: Scientific/Engineering',
+   'Topic :: Scientific/Engineering :: Artificial Intelligence',
+   'Topic :: Scientific/Engineering :: Visualization',
+ ]
+
+ # The repository and space URLs are optional, but recommended.
+ # Adding a repository URL will create a badge in the auto-generated README that links to the repository.
+ # Adding a space URL will create a badge in the auto-generated README that links to the space.
+ # This will make it easy for people to find your deployed demo or source code when they
+ # encounter your project in the wild.
+
+ # [project.urls]
+ # repository = "your github repository"
+ # space = "your space url"
+
+ [project.optional-dependencies]
+ dev = ["build", "twine"]
+
+ [tool.hatch.build]
+ artifacts = ["/backend/gradio_aicontext/templates", "*.pyi"]
+
+ [tool.hatch.build.targets.wheel]
+ packages = ["/backend/gradio_aicontext"]